[jira] [Work logged] (HDDS-1535) Space tracking for Open Containers : Handle Node Startup

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1535?focusedWorklogId=247287&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247287
 ]

ASF GitHub Bot logged work on HDDS-1535:


Author: ASF GitHub Bot
Created on: 23/May/19 05:27
Start Date: 23/May/19 05:27
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #832: HDDS-1535. Space tracking 
for Open Containers : Handle Node Startup. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/832#issuecomment-495072551
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247287)
Time Spent: 1h 20m  (was: 1h 10m)

> Space tracking for Open Containers : Handle Node Startup
> 
>
> Key: HDDS-1535
> URL: https://issues.apache.org/jira/browse/HDDS-1535
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This is related to HDDS-1511.
> Space tracking for open containers (committed space in the volume) relies on 
> usedBytes in the container state. usedBytes is not persisted on every update 
> (chunk write), so the value is stale after a node restart.
> The proposal is to iterate the block DB of each open container during startup 
> and recompute the used space. The scan will be accelerated by spawning an 
> executor task per container, and will be carried out as part of building the 
> container set during startup. A rough sketch of the idea follows this quoted 
> description.
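As a rough illustration of that startup pass, here is a minimal, self-contained sketch. It is not the actual HDDS code: BlockDb, ContainerSet, and the method names are illustrative stand-ins for the datanode's block DB and container set.

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

final class UsedBytesRecalculator {

  /** Minimal stand-in for the per-container block DB. */
  interface BlockDb {
    List<Long> blockLengths(long containerId);
  }

  /** Minimal stand-in for the datanode's container set. */
  interface ContainerSet {
    List<Long> openContainerIds();
    void setUsedBytes(long containerId, long usedBytes);
  }

  /** Recompute usedBytes for every open container, one task per container. */
  static void recalculate(ContainerSet containers, BlockDb blockDb)
      throws InterruptedException, ExecutionException {
    ExecutorService pool = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors());
    try {
      Map<Long, Future<Long>> pending = new HashMap<>();
      for (long id : containers.openContainerIds()) {
        // Sum the block lengths recorded in the block DB for this container.
        pending.put(id, pool.submit(() ->
            blockDb.blockLengths(id).stream().mapToLong(Long::longValue).sum()));
      }
      for (Map.Entry<Long, Future<Long>> e : pending.entrySet()) {
        containers.setUsedBytes(e.getKey(), e.getValue().get());
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}

Each container gets its own task, so a slow scan of one container's block DB does not serialize the whole startup.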



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10210) Remove the defunct startKdc profile from hdfs

2019-05-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16846429#comment-16846429
 ] 

Hadoop QA commented on HDFS-10210:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-10210 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10210 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795305/HDFS-10210.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26821/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Remove the defunct startKdc profile from hdfs
> -
>
> Key: HDFS-10210
> URL: https://issues.apache.org/jira/browse/HDFS-10210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-10210.001.patch, HDFS-10210.002.patch
>
>
> This is the corresponding HDFS jira of HADOOP-12948.
> The startKdc profile introduced in HDFS-3016 is broken, and is actually no 
> longer used at all. 
> Let's remove it.






[jira] [Commented] (HDFS-10210) Remove the defunct startKdc profile from hdfs

2019-05-22 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16846428#comment-16846428
 ] 

Akira Ajisaka commented on HDFS-10210:
--

Hi [~jojochuang], would you rebase the patch?

> Remove the defunct startKdc profile from hdfs
> -
>
> Key: HDFS-10210
> URL: https://issues.apache.org/jira/browse/HDFS-10210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-10210.001.patch, HDFS-10210.002.patch
>
>
> This is the corresponding HDFS jira of HADOOP-12948.
> The startKdc profile introduced in HDFS-3016 is broken, and is actually no 
> longer used at all. 
> Let's remove it.






[jira] [Updated] (HDFS-10210) Remove the defunct startKdc profile from hdfs

2019-05-22 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10210:
-
Target Version/s: 3.3.0

> Remove the defunct startKdc profile from hdfs
> -
>
> Key: HDFS-10210
> URL: https://issues.apache.org/jira/browse/HDFS-10210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-10210.001.patch, HDFS-10210.002.patch
>
>
> This is the corresponding HDFS jira of HADOOP-12948.
> The startKdc profile introduced in HDFS-3016 is broken, and is actually no 
> longer used at all. 
> Let's remove it.






[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=247284&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247284
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 23/May/19 05:04
Start Date: 23/May/19 05:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #846: HDDS-1555. 
Disable install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#issuecomment-495068305
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 612 | trunk passed |
   | +1 | compile | 327 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 989 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 387 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 637 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | -1 | mvninstall | 126 | hadoop-hdds in the patch failed. |
   | -1 | compile | 78 | hadoop-hdds in the patch failed. |
   | -1 | javac | 78 | hadoop-hdds in the patch failed. |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | the patch passed |
   | -1 | findbugs | 127 | hadoop-hdds in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 139 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1152 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 6639 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/846 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 62fb57d2ccc7 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 03aa70f |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/1/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/1/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/1/artifact/out/patch-findbugs-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/1/testReport/ |
   | Max. process+thread count | 4182 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 


[jira] [Work logged] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?focusedWorklogId=247271&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247271
 ]

ASF GitHub Bot logged work on HDDS-1530:


Author: ASF GitHub Bot
Created on: 23/May/19 04:01
Start Date: 23/May/19 04:01
Worklog Time Spent: 10m 
  Work Description: iamcaoxudong commented on pull request #830: HDDS-1530. 
Freon support big files larger than 2GB and add --bufferSize and 
--validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#discussion_r285984744
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -228,8 +243,20 @@ public Void call() throws Exception {
   init(freon.createOzoneConfiguration());
 }
 
-keyValue =
-DFSUtil.string2Bytes(RandomStringUtils.randomAscii(keySize - 36));
+keyValueBuffer = DFSUtil.string2Bytes(
+RandomStringUtils.randomAscii(bufferSize));
+
+// Compute the common initial digest for all keys without their UUID
+if (validateWrites) {
+  commonInitialMD = DigestUtils.getDigest(DIGEST_ALGORITHM);
+  int uuidLength = UUID.randomUUID().toString().length();
+  keySize = Math.max(uuidLength, keySize);
+  for (long nrRemaining = keySize - uuidLength; nrRemaining > 0;
+  nrRemaining -= bufferSize) {
+int curSize = (int)Math.min(bufferSize, nrRemaining);
+commonInitialMD.update(keyValueBuffer, 0, curSize);
 
 Review comment:
   Thank you for reviewing, but according to my test results, repeatedly 
computing the digest of the same content changes the result. I tested it by 
running the following statement in a loop:
   
   print ((MessageDigest)(commonInitialMD.clone())).digest()
   
   The reason we need to clone every time is that digest() is not an 
idempotent operation; it changes the internal buffer of the MessageDigest 
object (I think this is mainly due to padding).
   
   An example result:
   117 13 74 44 36 -116 -59 -123 -23 -10 -40 47 -19 -98 67 121 
   25 -69 53 -84 -126 -88 63 -30 -2 71 107 -62 57 -117 -32 115 
   81 -58 -108 -23 20 92 90 31 3 87 -34 -56 -71 -115 -107 -124 
   -108 110 -70 -54 3 -62 -64 -25 -20 -100 45 2 -95 -10 -35 -28 
   45 5 -43 46 -70 46 1 14 -46 60 -33 14 -67 59 -10 -13 
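For reference, a minimal self-contained sketch of the behavior being discussed; the class name and the "MD5" choice are illustrative (the patch uses its own DIGEST_ALGORITHM constant). Cloning preserves the accumulated state so an intermediate digest can be inspected, while calling digest() directly completes the hash and resets the object.

{code}
import java.security.MessageDigest;

public class DigestCloneDemo {
  public static void main(String[] args) throws Exception {
    MessageDigest md = MessageDigest.getInstance("MD5");
    md.update("some-buffer-contents".getBytes("UTF-8"));

    // clone() preserves the accumulated state; digest() on the clone leaves
    // the original untouched, so it can keep absorbing further updates.
    byte[] snapshotA = ((MessageDigest) md.clone()).digest();
    byte[] snapshotB = ((MessageDigest) md.clone()).digest();
    System.out.println(MessageDigest.isEqual(snapshotA, snapshotB)); // true

    // digest() on the object itself completes the hash and resets the state,
    // so a second call reflects an empty input, not the bytes fed in above.
    byte[] first = md.digest();
    byte[] second = md.digest();
    System.out.println(MessageDigest.isEqual(first, second)); // false
  }
}
{code}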
 



Issue Time Tracking
---

Worklog Id: (was: 247271)
Time Spent: 3.5h  (was: 3h 20m)

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> *Current problems:*
>  1. Freon does not support big files larger than 2 GB because it uses an int 
> for both the "keySize" parameter and the "keyValue" buffer size.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, Freon frequently reports OOM 
> exceptions.
>  3. Freon lacks an option such as "--validateWrites", so users cannot 
> explicitly request verification after writing.
> *Some solutions:*
>  1. Use a long for the "keySize" parameter so that Freon can support files 
> larger than 2 GB.
>  2. Reuse a small buffer rather than allocating the entire key-size buffer at 
> once; the default buffer size is 4 KB and can be configured with the 
> "--bufferSize" parameter (a sketch of this write loop follows this quoted 
> description).
>  3. Add a "--validateWrites" option to the Freon command line so users can 
> indicate that validation is required after the write.
>  
>  
>  
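A minimal, self-contained sketch of the buffered write loop described above, assuming a plain OutputStream and an MD5 digest as stand-ins; the BufferedKeyWriter name and its method are illustrative, not the actual Freon code.

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

final class BufferedKeyWriter {

  /**
   * Writes keySize bytes to out by cycling one small buffer, and returns a
   * digest of everything written so it can be validated after the write.
   */
  static byte[] writeKey(OutputStream out, byte[] buffer, long keySize)
      throws IOException, NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("MD5");
    for (long remaining = keySize; remaining > 0; remaining -= buffer.length) {
      int curSize = (int) Math.min(buffer.length, remaining);
      out.write(buffer, 0, curSize);
      md.update(buffer, 0, curSize);
    }
    return md.digest();
  }

  public static void main(String[] args) throws Exception {
    byte[] buffer = new byte[4096];                  // the 4 KB default, --bufferSize
    OutputStream out = new ByteArrayOutputStream();  // stand-in for a key stream
    byte[] digest = writeKey(out, buffer, 10_000L);  // keySize is a long, so >2 GB is fine
    System.out.println(digest.length);               // 16 bytes for MD5
  }
}
{code}

Memory per key stays constant regardless of keySize, and the returned digest can then be used for the post-write validation.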






[jira] [Work logged] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?focusedWorklogId=247269&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247269
 ]

ASF GitHub Bot logged work on HDDS-1530:


Author: ASF GitHub Bot
Created on: 23/May/19 04:00
Start Date: 23/May/19 04:00
Worklog Time Spent: 10m 
  Work Description: iamcaoxudong commented on pull request #830: HDDS-1530. 
Freon support big files larger than 2GB and add --bufferSize and 
--validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#discussion_r285984744
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -228,8 +243,20 @@ public Void call() throws Exception {
   init(freon.createOzoneConfiguration());
 }
 
-keyValue =
-DFSUtil.string2Bytes(RandomStringUtils.randomAscii(keySize - 36));
+keyValueBuffer = DFSUtil.string2Bytes(
+RandomStringUtils.randomAscii(bufferSize));
+
+// Compute the common initial digest for all keys without their UUID
+if (validateWrites) {
+  commonInitialMD = DigestUtils.getDigest(DIGEST_ALGORITHM);
+  int uuidLength = UUID.randomUUID().toString().length();
+  keySize = Math.max(uuidLength, keySize);
+  for (long nrRemaining = keySize - uuidLength; nrRemaining > 0;
+  nrRemaining -= bufferSize) {
+int curSize = (int)Math.min(bufferSize, nrRemaining);
+commonInitialMD.update(keyValueBuffer, 0, curSize);
 
 Review comment:
   Thank you for reviewing, but according to my test results, repeatedly 
computing the digest of the same content changes the result. I tested it by 
running the following statement in a loop:
   
   print ((MessageDigest)(commonInitialMD.clone())).digest()
   
   The reason we need to clone every time is that digest() is not an 
idempotent operation; it changes the internal buffer of the MessageDigest 
object (I think this is mainly due to padding).
 



Issue Time Tracking
---

Worklog Id: (was: 247269)
Time Spent: 3h 20m  (was: 3h 10m)

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> *Current problems:*
>  1. Freon does not support big files larger than 2 GB because it uses an int 
> for both the "keySize" parameter and the "keyValue" buffer size.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, Freon frequently reports OOM 
> exceptions.
>  3. Freon lacks an option such as "--validateWrites", so users cannot 
> explicitly request verification after writing.
> *Some solutions:*
>  1. Use a long for the "keySize" parameter so that Freon can support files 
> larger than 2 GB.
>  2. Reuse a small buffer rather than allocating the entire key-size buffer at 
> once; the default buffer size is 4 KB and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line so users can 
> indicate that validation is required after the write.
>  
>  
>  






[jira] [Updated] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1555:
-
Labels: MiniOzoneChaosCluster pull-request-available  (was: 
MiniOzoneChaosCluster)

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>
> When a follower lags far behind the leader, the leader tries to send a 
> snapshot to the follower. For ContainerStateMachine, the information in the 
> snapshot is not the entire state machine data, so InstallSnapshot should be 
> disabled for ContainerStateMachine.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=247261&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247261
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 23/May/19 03:11
Start Date: 23/May/19 03:11
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #846: HDDS-1555. 
Disable install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846
 
 
   A pom change is needed to pick up the latest Ratis master and the RATIS-564 
changes.
 



Issue Time Tracking
---

Worklog Id: (was: 247261)
Time Spent: 10m
Remaining Estimate: 0h

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a follower lags far behind the leader, the leader tries to send a 
> snapshot to the follower. For ContainerStateMachine, the information in the 
> snapshot is not the entire state machine data, so InstallSnapshot should be 
> disabled for ContainerStateMachine.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (HDFS-14494) Move Server logging of StatedId inside receiveRequestState()

2019-05-22 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14494:

Status: Patch Available  (was: Open)

> Move Server logging of StatedId inside receiveRequestState()
> 
>
> Key: HDFS-14494
> URL: https://issues.apache.org/jira/browse/HDFS-14494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Konstantin Shvachko
>Assignee: Shweta
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-14494.001.patch
>
>
> HDFS-14270 introduced logging of the client and server StateIds at trace 
> level. Unfortunately, one of the arguments, 
> {{alignmentContext.getLastSeenStateId()}}, holds a lock on FSEdits and is 
> evaluated even when trace logging is disabled. I propose moving the log 
> message inside {{GlobalStateIdContext.receiveRequestState()}}, where 
> {{clientStateId}} and {{serverStateId}} are already calculated and can be 
> printed cheaply. A sketch of the intended shape follows this description.
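A minimal, self-contained sketch of that shape, assuming SLF4J logging; the class, the stand-in getLastSeenStateId(), and the simplified method signature are illustrative, not the actual HDFS code.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class StateIdLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(StateIdLoggingSketch.class);

  /** Stand-in for the lock-taking call on the alignment context. */
  private long getLastSeenStateId() {
    return 42L; // in the real code this takes a lock on FSEdits
  }

  long receiveRequestState(long clientStateId) {
    // The server id is needed here anyway, so logging it costs nothing extra;
    // the guard keeps the message from being built when TRACE is off.
    long serverStateId = getLastSeenStateId();
    if (LOG.isTraceEnabled()) {
      LOG.trace("Client stateId: {}, server stateId: {}",
          clientStateId, serverStateId);
    }
    return serverStateId;
  }
}
{code}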






[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-22 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846397#comment-16846397
 ] 

Ayush Saxena commented on HDFS-13955:
-

Thanx [~crh] for the patch.
v003 LGTM +1

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch, HDFS-13955-HDFS-13891.003.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPS; a sketch of fetching the JMX servlet over 
> HTTPS follows this quoted description.
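A minimal sketch of reading a NameNode's /jmx servlet over HTTPS with the JDK's HttpsURLConnection; the host, port, and query parameter are illustrative, and a real service would also need the cluster's SSL configuration (truststore) wired in.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import javax.net.ssl.HttpsURLConnection;

public class JmxOverHttpsDemo {
  public static void main(String[] args) throws Exception {
    // Hypothetical secure NameNode HTTP server; 9871 is the default https port.
    URL url = new URL(
        "https://nn1.example.com:9871/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem");
    HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      StringBuilder json = new StringBuilder();
      for (String line; (line = in.readLine()) != null; ) {
        json.append(line).append('\n');
      }
      System.out.println(json); // JSON metrics, same payload as over plain HTTP
    } finally {
      conn.disconnect();
    }
  }
}
{code}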






[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=247257&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247257
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 23/May/19 02:47
Start Date: 23/May/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-495046975
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 519 | trunk passed |
   | +1 | compile | 279 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 826 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | trunk passed |
   | 0 | spotbugs | 283 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 472 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 459 | the patch passed |
   | +1 | compile | 254 | the patch passed |
   | +1 | cc | 254 | the patch passed |
   | +1 | javac | 254 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 610 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 83 | hadoop-ozone generated 1 new + 2 unchanged - 0 fixed = 
3 total (was 2) |
   | -1 | findbugs | 212 | hadoop-hdds generated 3 new + 0 unchanged - 0 fixed 
= 3 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 150 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1100 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 9095 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Synchronization performed on java.util.concurrent.ConcurrentHashMap in 
org.apache.hadoop.utils.db.cache.PartialTableCache.evictCache(long)  At 
PartialTableCache.java:org.apache.hadoop.utils.db.cache.PartialTableCache.evictCache(long)
  At PartialTableCache.java:[line 90] |
   |  |  Synchronization performed on java.util.concurrent.ConcurrentHashMap in 
org.apache.hadoop.utils.db.cache.PartialTableCache.get(CacheKey)  At 
PartialTableCache.java:org.apache.hadoop.utils.db.cache.PartialTableCache.get(CacheKey)
  At PartialTableCache.java:[line 61] |
   |  |  Synchronization performed on java.util.concurrent.ConcurrentHashMap in 
org.apache.hadoop.utils.db.cache.PartialTableCache.put(CacheKey, CacheValue)  
At 
PartialTableCache.java:org.apache.hadoop.utils.db.cache.PartialTableCache.put(CacheKey,
 CacheValue)  At PartialTableCache.java:[line 68] |
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/827 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux b6c070e2d924 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9c61494 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/7/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/7/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/7/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/7/testReport/ |
   | Max. 

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247246&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247246
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 02:32
Start Date: 23/May/19 02:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r286755507
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
 ##
 @@ -43,467 +41,334 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
-import java.util.concurrent.ExecutionException;
 
 /**
  * An {@link InputStream} used by the REST service in combination with the
  * SCMClient to read the value of a key from a sequence
  * of container chunks.  All bytes of the key value are stored in container
- * chunks.  Each chunk may contain multiple underlying {@link ByteBuffer}
+ * chunks. Each chunk may contain multiple underlying {@link ByteBuffer}
  * instances.  This class encapsulates all state management for iterating
- * through the sequence of chunks and the sequence of buffers within each 
chunk.
+ * through the sequence of chunks through {@link ChunkInputStream}.
  */
 public class BlockInputStream extends InputStream implements Seekable {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(BlockInputStream.class);
+
   private static final int EOF = -1;
 
   private final BlockID blockID;
+  private final long length;
+  private Pipeline pipeline;
+  private final long containerKey;
+  private final Token token;
+  private final boolean verifyChecksum;
   private final String traceID;
   private XceiverClientManager xceiverClientManager;
   private XceiverClientSpi xceiverClient;
-  private List<ChunkInfo> chunks;
-  // ChunkIndex points to the index current chunk in the buffers or the the
-  // index of chunk which will be read next into the buffers in
-  // readChunkFromContainer().
+  private boolean initialized = false;
+
+  // List of ChunkInputStreams, one for each chunk in the block
+  private List<ChunkInputStream> chunkStreams;
+
+  // chunkOffsets[i] stores the index of the first data byte in
+  // chunkStream i w.r.t the block data.
+  // Let’s say we have chunk size as 40 bytes. And let's say the parent
+  // block stores data from index 200 and has length 400.
+  // The first 40 bytes of this block will be stored in chunk[0], next 40 in
+  // chunk[1] and so on. But since the chunkOffsets are w.r.t the block only
+  // and not the key, the values in chunkOffsets will be [0, 40, 80,].
+  private long[] chunkOffsets = null;
+
+  // Index of the chunkStream corresponding to the current position of the
+  // BlockInputStream, i.e. the offset of the data to be read next from this block
   private int chunkIndex;
-  // ChunkIndexOfCurrentBuffer points to the index of chunk read into the
-  // buffers or index of the last chunk in the buffers. It is updated only
-  // when a new chunk is read from container into the buffers.
-  private int chunkIndexOfCurrentBuffer;
-  private long[] chunkOffset;
-  private List<ByteBuffer> buffers;
-  private int bufferIndex;
-  private long bufferPosition;
-  private final boolean verifyChecksum;
 
-  /**
-   * Creates a new BlockInputStream.
-   *
-   * @param blockID block ID of the chunk
-   * @param xceiverClientManager client manager that controls client
-   * @param xceiverClient client to perform container calls
-   * @param chunks list of chunks to read
-   * @param traceID container protocol call traceID
-   * @param verifyChecksum verify checksum
-   * @param initialPosition the initial position of the stream pointer. This
-   *position is seeked now if the up-stream was seeked
-   *before this was created.
-   */
-  public BlockInputStream(
-  BlockID blockID, XceiverClientManager xceiverClientManager,
-  XceiverClientSpi xceiverClient, List<ChunkInfo> chunks, String traceID,
-  boolean verifyChecksum, long initialPosition) throws IOException {
-this.blockID = blockID;
-this.traceID = traceID;
-this.xceiverClientManager = xceiverClientManager;
-this.xceiverClient = xceiverClient;
-this.chunks = chunks;
-this.chunkIndex = 0;
-this.chunkIndexOfCurrentBuffer = -1;
-// chunkOffset[i] stores offset at which chunk i stores data in
-// BlockInputStream
-this.chunkOffset = new long[this.chunks.size()];
-initializeChunkOffset();
-this.buffers = null;
-this.bufferIndex = 0;
-this.bufferPosition = -1;
+  // Position of the BlockInputStream is maintained by this variable till
+  // the stream is initialized. This position is w.r.t. the block only and
+  // not the key.
+  // For the above 

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247248&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247248
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 02:32
Start Date: 23/May/19 02:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r286755511
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -0,0 +1,531 @@
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.common.Checksum;
+import org.apache.hadoop.ozone.common.ChecksumData;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence of container chunks.
+ * All bytes of the key value are stored in container chunks. Each chunk may
+ * contain multiple underlying {@link ByteBuffer} instances.  This class
+ * encapsulates all state management for iterating through the sequence of
+ * buffers within each chunk.
+ */
+public class ChunkInputStream extends InputStream implements Seekable {
+
+  private final ChunkInfo chunkInfo;
+  private final long length;
+  private final BlockID blockID;
+  private final String traceID;
+  private XceiverClientSpi xceiverClient;
+  private final boolean verifyChecksum;
+  private boolean allocated = false;
+
+  // Buffer to store the chunk data read from the DN container
+  private List<ByteBuffer> buffers;
+
+  // Index of the buffers corresponding to the current position of the buffers
+  private int bufferIndex;
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247248)
Time Spent: 2.5h  (was: 2h 20m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk 
> even if only a part of the chunk is needed.
> This Jira aims to improve readChunkFromContainer() so that only the part of 
> the chunk file needed by the client is read, plus whatever extra is required 
> to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the 
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. the 
> first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so 
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so 
> that checksum verification can be done. A sketch of this alignment follows 
> this quoted description.
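A minimal, self-contained sketch of the range expansion described above; the class and method names are illustrative, not the actual HDDS code.

{code}
// Expand a requested byte range to checksum-boundary alignment so the
// stored per-interval checksums can be verified.
final class ChecksumAlignedRange {

  /** Returns {alignedStart, alignedEndExclusive}, clamped to the chunk length. */
  static long[] align(long readStart, long readEndInclusive,
                      int bytesPerChecksum, long chunkLen) {
    long alignedStart = (readStart / bytesPerChecksum) * bytesPerChecksum;
    long alignedEnd = Math.min(chunkLen,
        ((readEndInclusive / bytesPerChecksum) + 1) * bytesPerChecksum);
    return new long[] {alignedStart, alignedEnd};
  }

  public static void main(String[] args) {
    // Reading bytes 120..450 with 100-byte checksum intervals needs 100..499.
    long[] r = align(120, 450, 100, 1000);
    System.out.println(r[0] + ".." + (r[1] - 1)); // prints 100..499
  }
}
{code}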






[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247245&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247245
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 02:32
Start Date: 23/May/19 02:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r286755519
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -0,0 +1,531 @@
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.common.Checksum;
+import org.apache.hadoop.ozone.common.ChecksumData;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence of container chunks.
+ * All bytes of the key value are stored in container chunks. Each chunk may
+ * contain multiple underlying {@link ByteBuffer} instances.  This class
+ * encapsulates all state management for iterating through the sequence of
+ * buffers within each chunk.
+ */
+public class ChunkInputStream extends InputStream implements Seekable {
+
+  private final ChunkInfo chunkInfo;
+  private final long length;
+  private final BlockID blockID;
+  private final String traceID;
+  private XceiverClientSpi xceiverClient;
+  private final boolean verifyChecksum;
+  private boolean allocated = false;
+
+  // Buffer to store the chunk data read from the DN container
+  private List<ByteBuffer> buffers;
+
+  // Index of the buffers corresponding to the current position of the buffers
+  private int bufferIndex;
+  
+  // The offset of the current data residing in the buffers w.r.t the start
+  // of chunk data
+  private long bufferOffset;
+  
+  // The number of bytes of chunk data residing in the buffers currently
+  private long bufferLength;
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247245)
Time Spent: 2h  (was: 1h 50m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk 
> even if only a part of the chunk is needed.
> This Jira aims to improve readChunkFromContainer() so that only the part of 
> the chunk file needed by the client is read, plus whatever extra is required 
> to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the 
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. the 
> first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so 
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so 
> that checksum verification can be done.




[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247247&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247247
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 02:32
Start Date: 23/May/19 02:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#issuecomment-495044511
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 72 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 93 | Maven dependency ordering for branch |
   | +1 | mvninstall | 726 | trunk passed |
   | +1 | compile | 321 | trunk passed |
   | +1 | checkstyle | 91 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1016 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 304 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 500 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 98 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 64 | hadoop-ozone in the patch failed. |
   | -1 | compile | 58 | hadoop-hdds in the patch failed. |
   | -1 | compile | 45 | hadoop-ozone in the patch failed. |
   | -1 | javac | 58 | hadoop-hdds in the patch failed. |
   | -1 | javac | 45 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 35 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 5 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 696 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 142 | the patch passed |
   | -1 | findbugs | 109 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 83 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 184 | hadoop-hdds in the patch failed. |
   | -1 | unit | 55 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 4805 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/804 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8317136f19fc 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9c61494 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/patch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247244&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247244
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 02:32
Start Date: 23/May/19 02:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r286755505
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
 ##
 @@ -43,467 +41,334 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
-import java.util.concurrent.ExecutionException;
 
 /**
  * An {@link InputStream} used by the REST service in combination with the
  * SCMClient to read the value of a key from a sequence
  * of container chunks.  All bytes of the key value are stored in container
- * chunks.  Each chunk may contain multiple underlying {@link ByteBuffer}
+ * chunks. Each chunk may contain multiple underlying {@link ByteBuffer}
  * instances.  This class encapsulates all state management for iterating
- * through the sequence of chunks and the sequence of buffers within each 
chunk.
+ * through the sequence of chunks through {@link ChunkInputStream}.
  */
 public class BlockInputStream extends InputStream implements Seekable {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(BlockInputStream.class);
+
   private static final int EOF = -1;
 
   private final BlockID blockID;
+  private final long length;
+  private Pipeline pipeline;
+  private final long containerKey;
+  private final Token token;
+  private final boolean verifyChecksum;
   private final String traceID;
   private XceiverClientManager xceiverClientManager;
   private XceiverClientSpi xceiverClient;
-  private List<ChunkInfo> chunks;
-  // ChunkIndex points to the index current chunk in the buffers or the the
-  // index of chunk which will be read next into the buffers in
-  // readChunkFromContainer().
+  private boolean initialized = false;
+
+  // List of ChunkInputStreams, one for each chunk in the block
+  private List<ChunkInputStream> chunkStreams;
+
+  // chunkOffsets[i] stores the index of the first data byte in
+  // chunkStream i w.r.t the block data.
+  // Let’s say we have chunk size as 40 bytes. And let's say the parent
+  // block stores data from index 200 and has length 400.
+  // The first 40 bytes of this block will be stored in chunk[0], next 40 in
+  // chunk[1] and so on. But since the chunkOffsets are w.r.t the block only
+  // and not the key, the values in chunkOffsets will be [0, 40, 80,].
+  private long[] chunkOffsets = null;
+
+  // Index of the chunkStream corresponding to the current position of the
+  // BlockInputStream, i.e. the offset of the data to be read next from this block
   private int chunkIndex;
-  // ChunkIndexOfCurrentBuffer points to the index of chunk read into the
-  // buffers or index of the last chunk in the buffers. It is updated only
-  // when a new chunk is read from container into the buffers.
-  private int chunkIndexOfCurrentBuffer;
-  private long[] chunkOffset;
-  private List<ByteBuffer> buffers;
-  private int bufferIndex;
-  private long bufferPosition;
-  private final boolean verifyChecksum;
 
-  /**
-   * Creates a new BlockInputStream.
-   *
-   * @param blockID block ID of the chunk
-   * @param xceiverClientManager client manager that controls client
-   * @param xceiverClient client to perform container calls
-   * @param chunks list of chunks to read
-   * @param traceID container protocol call traceID
-   * @param verifyChecksum verify checksum
-   * @param initialPosition the initial position of the stream pointer. This
-   *position is seeked now if the up-stream was seeked
-   *before this was created.
-   */
-  public BlockInputStream(
-  BlockID blockID, XceiverClientManager xceiverClientManager,
-  XceiverClientSpi xceiverClient, List<ChunkInfo> chunks, String traceID,
-  boolean verifyChecksum, long initialPosition) throws IOException {
-this.blockID = blockID;
-this.traceID = traceID;
-this.xceiverClientManager = xceiverClientManager;
-this.xceiverClient = xceiverClient;
-this.chunks = chunks;
-this.chunkIndex = 0;
-this.chunkIndexOfCurrentBuffer = -1;
-// chunkOffset[i] stores offset at which chunk i stores data in
-// BlockInputStream
-this.chunkOffset = new long[this.chunks.size()];
-initializeChunkOffset();
-this.buffers = null;
-this.bufferIndex = 0;
-this.bufferPosition = -1;
+  // Position of the BlockInputStream is maintained by this variable till
+  // the stream is initialized. This position is w.r.t. the block only and
+  // not the key.
+  // For the above 

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247243&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247243
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 02:32
Start Date: 23/May/19 02:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r286755517
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -0,0 +1,531 @@
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.common.Checksum;
+import org.apache.hadoop.ozone.common.ChecksumData;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence of container chunks.
+ * All bytes of the key value are stored in container chunks. Each chunk may
+ * contain multiple underlying {@link ByteBuffer} instances.  This class
+ * encapsulates all state management for iterating through the sequence of
+ * buffers within each chunk.
+ */
+public class ChunkInputStream extends InputStream implements Seekable {
+
+  private final ChunkInfo chunkInfo;
+  private final long length;
+  private final BlockID blockID;
+  private final String traceID;
+  private XceiverClientSpi xceiverClient;
+  private final boolean verifyChecksum;
+  private boolean allocated = false;
+
+  // Buffer to store the chunk data read from the DN container
+  private List buffers;
+
+  // Index of the buffer corresponding to the current position within the buffers
+  private int bufferIndex;
+  
+  // The offset of the current data residing in the buffers w.r.t the start
+  // of chunk data
+  private long bufferOffset;
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247243)
Time Spent: 1h 40m  (was: 1.5h)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk 
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of 
> the chunk file needed by the client is read, plus the part of the chunk file 
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the 
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. 
> the first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, 
> and so on. To verify bytes 120 to 450, we would need to 
> read bytes 100 to 499 so that checksum verification can be done.
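
As a rough, hypothetical illustration of the alignment arithmetic described above
(the class and method below are invented for this example and are not the patch
code), the checksum-aligned read range could be computed like this:

// Illustrative sketch only: computes the checksum-aligned read range for a
// requested span, given a fixed bytesPerChecksum. Names are hypothetical.
final class ChecksumAlignedRange {

  // Returns {alignedOffset, alignedLength} covering [offset, offset + length)
  // rounded out to checksum boundaries, clamped to the chunk length.
  static long[] align(long offset, long length, int bytesPerChecksum,
      long chunkLength) {
    long alignedStart = (offset / bytesPerChecksum) * bytesPerChecksum;
    long end = offset + length;
    long alignedEnd = ((end + bytesPerChecksum - 1) / bytesPerChecksum)
        * bytesPerChecksum;
    alignedEnd = Math.min(alignedEnd, chunkLength);
    return new long[] {alignedStart, alignedEnd - alignedStart};
  }

  public static void main(String[] args) {
    // The example from the description: reading bytes 120..450 (inclusive,
    // so length 331) with 100-byte checksums in a 1000-byte chunk means
    // bytes 100..499 must be read.
    long[] range = align(120, 331, 100, 1000);
    System.out.println(range[0] + ", " + range[1]);   // prints 100, 400
  }
}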



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=247237=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247237
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 23/May/19 02:23
Start Date: 23/May/19 02:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-495042774
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 87 | Maven dependency ordering for branch |
   | +1 | mvninstall | 587 | trunk passed |
   | +1 | compile | 281 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 953 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | trunk passed |
   | 0 | spotbugs | 306 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 512 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 472 | the patch passed |
   | +1 | compile | 277 | the patch passed |
   | +1 | cc | 277 | the patch passed |
   | +1 | javac | 277 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 705 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 74 | hadoop-ozone generated 1 new + 2 unchanged - 0 fixed = 
3 total (was 2) |
   | -1 | findbugs | 214 | hadoop-hdds generated 3 new + 0 unchanged - 0 fixed 
= 3 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 173 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1170 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6511 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Synchronization performed on java.util.concurrent.ConcurrentHashMap in 
org.apache.hadoop.utils.db.cache.PartialTableCache.evictCache(long)  At 
PartialTableCache.java:org.apache.hadoop.utils.db.cache.PartialTableCache.evictCache(long)
  At PartialTableCache.java:[line 90] |
   |  |  Synchronization performed on java.util.concurrent.ConcurrentHashMap in 
org.apache.hadoop.utils.db.cache.PartialTableCache.get(CacheKey)  At 
PartialTableCache.java:org.apache.hadoop.utils.db.cache.PartialTableCache.get(CacheKey)
  At PartialTableCache.java:[line 61] |
   |  |  Synchronization performed on java.util.concurrent.ConcurrentHashMap in 
org.apache.hadoop.utils.db.cache.PartialTableCache.put(CacheKey, CacheValue)  
At 
PartialTableCache.java:org.apache.hadoop.utils.db.cache.PartialTableCache.put(CacheKey,
 CacheValue)  At PartialTableCache.java:[line 68] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.web.client.TestBuckets |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/827 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 9544cfb72bf7 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9c61494 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/8/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/8/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/8/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 

[jira] [Work logged] (HDDS-1584) Fix TestFailureHandlingByClient tests

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1584?focusedWorklogId=247234=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247234
 ]

ASF GitHub Bot logged work on HDDS-1584:


Author: ASF GitHub Bot
Created on: 23/May/19 02:11
Start Date: 23/May/19 02:11
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on issue #845: HDDS-1584. Fix 
TestFailureHandlingByClient tests
URL: https://github.com/apache/hadoop/pull/845#issuecomment-495040496
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247234)
Time Spent: 0.5h  (was: 20m)

> Fix TestFailureHandlingByClient tests
> -
>
> Key: HDDS-1584
> URL: https://issues.apache.org/jira/browse/HDDS-1584
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846341#comment-16846341
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
1s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m  
9s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
5s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
10s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
11s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}135m 14s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}327m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
|   | hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServices |
|   | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
|   

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247205=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247205
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 01:13
Start Date: 23/May/19 01:13
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#issuecomment-495030313
 
 
   Introduced ChunkInputStream and separated the chunk reads from 
BlockInputStream.
   This is an initial patch. I am working on unit tests for this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247205)
Time Spent: 1.5h  (was: 1h 20m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk 
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of 
> the chunk file needed by the client is read, plus the part of the chunk file 
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the 
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. 
> the first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, 
> and so on. To verify bytes 120 to 450, we would need to 
> read bytes 100 to 499 so that checksum verification can be done.
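
As a complementary, purely illustrative sketch (CRC32 from the JDK is used here
only as a stand-in for Ozone's own checksum types, and the helper below is not
the patch code), this is roughly how the re-read aligned bytes would be verified
window by window before returning only the requested sub-range:

// Illustrative only: verify each bytesPerChecksum window of the aligned range.
import java.io.IOException;
import java.util.zip.CRC32;

final class ChecksumVerifySketch {

  // data covers the aligned range; checksums[i] is the expected CRC32 of the
  // i-th bytesPerChecksum window of that range.
  static void verify(byte[] data, long[] checksums, int bytesPerChecksum)
      throws IOException {
    for (int i = 0; i * bytesPerChecksum < data.length; i++) {
      int from = i * bytesPerChecksum;
      int len = Math.min(bytesPerChecksum, data.length - from);
      CRC32 crc = new CRC32();
      crc.update(data, from, len);
      if (crc.getValue() != checksums[i]) {
        throw new IOException("Checksum mismatch in window " + i);
      }
    }
  }
}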



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=247200=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247200
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 23/May/19 00:41
Start Date: 23/May/19 00:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-495024889
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 527 | trunk passed |
   | +1 | compile | 268 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 939 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 335 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 525 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 485 | the patch passed |
   | +1 | compile | 283 | the patch passed |
   | +1 | cc | 283 | the patch passed |
   | +1 | javac | 283 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 8 new + 0 
unchanged - 0 fixed = 8 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 738 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 89 | hadoop-ozone generated 5 new + 2 unchanged - 0 fixed = 
7 total (was 2) |
   | +1 | findbugs | 517 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 159 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1139 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 9471 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/827 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux d9ebc5853ef6 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9c61494 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/6/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/6/testReport/ |
   | Max. process+thread count | 4868 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the 

[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-22 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846318#comment-16846318
 ] 

Íñigo Goiri commented on HDFS-13955:


+1 on  [^HDFS-13955-HDFS-13891.003.patch].
I'll wait a little for others to take a look.

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch, HDFS-13955-HDFS-13891.003.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1551:
-
Description: 
Implement Bucket write requests to use OM Cache, double buffer.

And also, OM previously used the Ratis client to communicate with the Ratis 
server; instead, use the Ratis server APIs.

 
 # Implement the checkAcl method with the new Request classes. Since in the Grpc 
Context we will not have a UGI object, we need to set userName and 
remotehostAddress during the pre-Execute step, and use this information to 
construct the UGI and InetAddress and then call checkAcl.
 # Implement takeSnapshot once the flush to the OM DB is completed.

 

In this Jira we will add the changes to implement bucket operations. HA/Non-HA 
will have different code paths, but once all requests are implemented they will 
have a single code path.

  was:
Implement Bucket write requests to use OM Cache, double buffer.

And also, OM previously used the Ratis client to communicate with the Ratis 
server; instead, use the Ratis server APIs.

 

Implement the checkAcl method with the new Request classes. Since in the Grpc 
Context we will not have a UGI object, we need to set userName and 
remotehostAddress during the pre-Execute step, and use this information to 
construct the UGI and InetAddress and then call checkAcl.

 

In this Jira we will add the changes to implement bucket operations. HA/Non-HA 
will have different code paths, but once all requests are implemented they will 
have a single code path.


> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use OM Cache, double buffer.
> And also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
>  # Implement the checkAcl method with the new Request classes. Since in the 
> Grpc Context we will not have a UGI object, we need to set userName and 
> remotehostAddress during the pre-Execute step, and use this information to 
> construct the UGI and InetAddress and then call checkAcl.
>  # Implement takeSnapshot once the flush to the OM DB is completed.
>  
> In this Jira we will add the changes to implement bucket operations. 
> HA/Non-HA will have different code paths, but once all requests are 
> implemented they will have a single code path.
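
A minimal sketch of the checkAcl item quoted above, assuming the identity was
captured during the pre-execute step; the checkAcls helper below is a
hypothetical placeholder, and only UserGroupInformation.createRemoteUser and
InetAddress.getByName are real APIs:

// Rough sketch of the ACL-check idea, not the actual OzoneManager code.
import java.net.InetAddress;
import java.net.UnknownHostException;
import org.apache.hadoop.security.UserGroupInformation;

final class AclCheckSketch {

  static void checkWithCapturedIdentity(String userName, String remoteHostAddress)
      throws UnknownHostException {
    // userName and remoteHostAddress are captured during pre-execute, since no
    // UGI is available later in the Ratis/gRPC apply path.
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser(userName);
    InetAddress remoteAddress = InetAddress.getByName(remoteHostAddress);
    checkAcls(ugi, remoteAddress);   // hypothetical stand-in for the real check
  }

  private static void checkAcls(UserGroupInformation ugi, InetAddress addr) {
    // In a real implementation this would consult the volume/bucket ACLs.
  }
}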



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247195=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247195
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 00:34
Start Date: 23/May/19 00:34
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#issuecomment-495023686
 
 
   Test failures are not related to this patch
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247195)
Time Spent: 7h 20m  (was: 7h 10m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush, while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer will have 2 buffers: one is currentBuffer and the other is 
> readyBuffer. We add an entry to the current buffer and check whether another 
> flush call is outstanding. If not, we flush to disk; otherwise we add entries 
> to the other buffer while the sync is happening.
>  
> While a sync is in progress, new requests go to the other buffer, and when we 
> can sync we use *RocksDB batch commit to sync to disk, instead of 
> rocksdb put.*
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. A flush failure should be 
> treated as a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; integrating 
> it into the current OM will be done in further Jiras.
>  
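
A minimal, generic sketch of the currentBuffer/readyBuffer swap described above
(class, method and interface names here are invented; this is not the
OzoneManager implementation):

// Illustrative double-buffer: producers append to currentBuffer while a single
// flusher thread writes the previously swapped batch.
import java.util.ArrayList;
import java.util.List;

final class DoubleBufferSketch<T> {

  private List<T> currentBuffer = new ArrayList<>();
  private List<T> readyBuffer = new ArrayList<>();

  // New transactions are always appended to currentBuffer.
  synchronized void add(T txn) {
    currentBuffer.add(txn);
    notifyAll();
  }

  // A single flusher thread swaps the buffers and writes one batch at a time,
  // so only one flush is ever outstanding.
  void flushLoop(BatchWriter<T> writer) throws InterruptedException {
    while (true) {
      List<T> batch;
      synchronized (this) {
        while (currentBuffer.isEmpty()) {
          wait();
        }
        // Swap: the filled buffer becomes the ready batch, new entries keep
        // accumulating in the (now empty) other buffer.
        batch = currentBuffer;
        currentBuffer = readyBuffer;
        readyBuffer = batch;
      }
      writer.writeBatch(batch);   // e.g. one RocksDB batch commit
      batch.clear();
    }
  }

  interface BatchWriter<T> {
    void writeBatch(List<T> batch);
  }
}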



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846314#comment-16846314
 ] 

Hadoop QA commented on HDFS-13955:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
22s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m 
17s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13955 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969439/HDFS-13955-HDFS-13891.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ecdac77fe996 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26819/testReport/ |
| Max. process+thread count | 1046 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26819/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Support secure Namenode in NamenodeHeartbeatService
> 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846289#comment-16846289
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m 
42s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
6s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 24m 
32s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  6m 
56s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
57s{color} | {color:red} fault-injection-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
51s{color} | {color:red} network-tests in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 20m 
12s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 20m 12s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  4m 
44s{color} | {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
11s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
11s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  7m 
59s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 28s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Updated] (HDFS-14494) Move Server logging of StatedId inside receiveRequestState()

2019-05-22 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14494:
--
Attachment: HDFS-14494.001.patch

> Move Server logging of StatedId inside receiveRequestState()
> 
>
> Key: HDFS-14494
> URL: https://issues.apache.org/jira/browse/HDFS-14494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Konstantin Shvachko
>Assignee: Shweta
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-14494.001.patch
>
>
> HDFS-14270 introduced logging of the client and server StateIds at trace 
> level. Unfortunately one of the arguments, 
> {{alignmentContext.getLastSeenStateId()}}, holds a lock on FSEdits and is 
> evaluated even if the trace logging level is disabled. I propose to move the 
> logging message inside {{GlobalStateIdContext.receiveRequestState()}}, where 
> {{clientStateId}} and {{serverStateId}} are already calculated and can be 
> easily printed.
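
A simplified sketch of the proposed move (the class and method signature below
are stand-ins, not the actual GlobalStateIdContext source):

// Log inside receiveRequestState(), where both ids are already local values,
// so nothing expensive is evaluated when trace logging is disabled.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class StateIdLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(StateIdLoggingSketch.class);

  long receiveRequestState(long clientStateId, long serverStateId) {
    if (LOG.isTraceEnabled()) {
      LOG.trace("Client state id: {}, server state id: {}",
          clientStateId, serverStateId);
    }
    return serverStateId;
  }
}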



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1580) Obtain Handler reference in ContainerScrubber

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1580?focusedWorklogId=247103=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247103
 ]

ASF GitHub Bot logged work on HDDS-1580:


Author: ASF GitHub Bot
Created on: 22/May/19 22:25
Start Date: 22/May/19 22:25
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on issue #842: HDDS-1580.Obtain 
Handler reference in ContainerScrubber
URL: https://github.com/apache/hadoop/pull/842#issuecomment-494996929
 
 
   The failed test TestHddsDatanodeService fails locally on trunk. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247103)
Time Spent: 40m  (was: 0.5h)

> Obtain Handler reference in ContainerScrubber
> -
>
> Key: HDDS-1580
> URL: https://issues.apache.org/jira/browse/HDDS-1580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Obtain reference to Handler based on containerType in scrub() in 
> ContainerScrubber.java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247096=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247096
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 22/May/19 22:21
Start Date: 22/May/19 22:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#issuecomment-494995987
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247096)
Time Spent: 7h 10m  (was: 7h)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush, while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer will have 2 buffers: one is currentBuffer and the other is 
> readyBuffer. We add an entry to the current buffer and check whether another 
> flush call is outstanding. If not, we flush to disk; otherwise we add entries 
> to the other buffer while the sync is happening.
>  
> While a sync is in progress, new requests go to the other buffer, and when we 
> can sync we use *RocksDB batch commit to sync to disk, instead of 
> rocksdb put.*
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. A flush failure should be 
> treated as a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; integrating 
> it into the current OM will be done in further Jiras.
>  
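
A small sketch of the "batch commit instead of per-operation put" idea, using
the public RocksDB Java API; the key/value layout and helper name are made up
for the example and are not the OzoneManager code:

// Commit a set of pending transactions as one RocksDB WriteBatch.
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

final class BatchCommitSketch {

  static void flushBatch(RocksDB db, Map<String, String> pendingTxns)
      throws RocksDBException {
    try (WriteBatch batch = new WriteBatch();
         WriteOptions options = new WriteOptions()) {
      for (Map.Entry<String, String> e : pendingTxns.entrySet()) {
        batch.put(e.getKey().getBytes(StandardCharsets.UTF_8),
            e.getValue().getBytes(StandardCharsets.UTF_8));
      }
      // One write() call commits the whole batch, instead of a db.put()
      // per transaction.
      db.write(options, batch);
    }
  }
}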



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-22 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-13955:
---
Attachment: HDFS-13955-HDFS-13891.003.patch

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch, HDFS-13955-HDFS-13891.003.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-22 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846254#comment-16846254
 ] 

Íñigo Goiri commented on HDFS-13955:


Thanks [~crh], this looks pretty much done.
Just a couple minor style comments:
* Space after if in TestRouterNamenodeMonitoring#309.
* Use capital JMX in FederationUtil#77.

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=247080=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247080
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 22/May/19 22:01
Start Date: 22/May/19 22:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-494990794
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 46 | Maven dependency ordering for branch |
   | +1 | mvninstall | 562 | trunk passed |
   | +1 | compile | 276 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 855 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | trunk passed |
   | 0 | spotbugs | 290 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 478 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 551 | the patch passed |
   | +1 | compile | 269 | the patch passed |
   | +1 | cc | 269 | the patch passed |
   | +1 | javac | 269 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 632 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 83 | hadoop-ozone generated 4 new + 2 unchanged - 0 fixed = 
6 total (was 2) |
   | +1 | findbugs | 499 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 163 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1409 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 9664 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/827 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 7f4c042b294b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9c61494 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/5/testReport/ |
   | Max. process+thread count | 5344 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247080)
Time Spent: 1h 50m  (was: 1h 40m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop 

[jira] [Commented] (HDDS-1487) Bootstrap React framework for Recon UI

2019-05-22 Thread Vivek Ratnavel Subramanian (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846244#comment-16846244
 ] 

Vivek Ratnavel Subramanian commented on HDDS-1487:
--

Thanks Marton! I have created https://issues.apache.org/jira/browse/HDDS-1585 
to track this effort.

> Bootstrap React framework for Recon UI
> --
>
> Key: HDDS-1487
> URL: https://issues.apache.org/jira/browse/HDDS-1487
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Bootstrap React with Typescript, Ant, LESS and other necessary libraries for 
> Recon UI. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-05-22 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1585:


 Summary: Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
 Key: HDDS-1585
 URL: https://issues.apache.org/jira/browse/HDDS-1585
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Recon
Affects Versions: 0.4.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1551:
-
Description: 
Implement Bucket write requests to use OM Cache, double buffer.

And also, OM previously used the Ratis client to communicate with the Ratis 
server; instead, use the Ratis server APIs.

 

Implement the checkAcl method with the new Request classes. Since in the Grpc 
Context we will not have a UGI object, we need to set userName and 
remotehostAddress during the pre-Execute step, and use this information to 
construct the UGI and InetAddress and then call checkAcl.

 

In this Jira we will add the changes to implement bucket operations. HA/Non-HA 
will have different code paths, but once all requests are implemented they will 
have a single code path.

  was:
Implement Bucket write requests to use OM Cache, double buffer.

And also, OM previously used the Ratis client to communicate with the Ratis 
server; instead, use the Ratis server APIs.

 

In this Jira we will add the changes to implement bucket operations. HA/Non-HA 
will have different code paths, but once all requests are implemented they will 
have a single code path.


> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Previously, OM used the Ratis client to communicate with the Ratis server; 
> instead, use the Ratis server APIs.
>  
> Implement the checkAcl method with the new Request classes. Since the gRPC 
> context will not have a UGI object, we need to set the userName and remote 
> host address during the pre-execute step, and use this information to 
> construct the UGI and InetAddress and then call checkAcl.
>  
> This Jira will add the changes to implement bucket operations. HA and non-HA 
> will have different code paths, but once all requests are implemented there 
> will be a single code path.
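
A minimal sketch of the pre-execute idea above, with placeholder names (the 
actual OM request classes and ACL-check signature may differ):

{code:java}
// Sketch only (fragment): assumes org.apache.hadoop.security.UserGroupInformation
// and java.net.InetAddress imports. userName and remoteAddress are assumed to
// have been captured during preExecute() while the gRPC context was available.
UserGroupInformation ugi = UserGroupInformation.createRemoteUser(userName);
InetAddress remoteIp = InetAddress.getByName(remoteAddress);
// checkAcls(...) is a placeholder for the OM ACL check that uses ugi and remoteIp.
checkAcls(requestedResource, ugi, remoteIp);
{code}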



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1580) Obtain Handler reference in ContainerScrubber

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1580?focusedWorklogId=247026=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247026
 ]

ASF GitHub Bot logged work on HDDS-1580:


Author: ASF GitHub Bot
Created on: 22/May/19 21:23
Start Date: 22/May/19 21:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #842: HDDS-1580.Obtain 
Handler reference in ContainerScrubber
URL: https://github.com/apache/hadoop/pull/842#issuecomment-494978772
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 543 | trunk passed |
   | +1 | compile | 283 | trunk passed |
   | +1 | checkstyle | 89 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 899 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 283 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 471 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 503 | the patch passed |
   | +1 | compile | 273 | the patch passed |
   | +1 | javac | 273 | the patch passed |
   | +1 | checkstyle | 92 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 688 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | the patch passed |
   | +1 | findbugs | 497 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 170 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1491 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 6628 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-842/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/842 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1ca7a6a7d663 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9c61494 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-842/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-842/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-842/2/testReport/ |
   | Max. process+thread count | 5360 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-842/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247026)
Time Spent: 0.5h  (was: 20m)

> Obtain Handler reference in ContainerScrubber
> -
>
> Key: HDDS-1580
> URL: https://issues.apache.org/jira/browse/HDDS-1580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
> 

[jira] [Commented] (HDFS-14500) NameNode StartupProgress continues to report edit log segments after the LOADING_EDITS phase is finished

2019-05-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846231#comment-16846231
 ] 

Hadoop QA commented on HDFS-14500:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 56 unchanged - 8 fixed = 56 total (was 64) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
|
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14500 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969420/HDFS-14500.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 213826e2f869 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a315913 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26818/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26818/testReport/ |
| Max. process+thread count | 2985 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console 

[jira] [Work logged] (HDDS-1584) Fix TestFailureHandlingByClient tests

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1584?focusedWorklogId=247013=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247013
 ]

ASF GitHub Bot logged work on HDDS-1584:


Author: ASF GitHub Bot
Created on: 22/May/19 21:07
Start Date: 22/May/19 21:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #845: HDDS-1584. Fix 
TestFailureHandlingByClient tests
URL: https://github.com/apache/hadoop/pull/845#issuecomment-494973644
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 555 | trunk passed |
   | +1 | compile | 265 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 954 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | trunk passed |
   | 0 | spotbugs | 292 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 484 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 508 | the patch passed |
   | +1 | compile | 274 | the patch passed |
   | +1 | javac | 274 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 746 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | the patch passed |
   | +1 | findbugs | 527 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 176 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1311 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 6644 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-845/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/845 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e8791484682e 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9c61494 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-845/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-845/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-845/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-845/1/testReport/ |
   | Max. process+thread count | 3729 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-845/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247013)
Time Spent: 20m  (was: 10m)

> Fix TestFailureHandlingByClient tests
> -
>
> Key: HDDS-1584
> URL: https://issues.apache.org/jira/browse/HDDS-1584
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> 

[jira] [Updated] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1458:

Attachment: HDDS-1458.012.patch

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846198#comment-16846198
 ] 

Eric Yang edited comment on HDDS-1458 at 5/22/19 8:00 PM:
--

Patch 012 addresses 1, 2, 4, 5, 6, and 7 in [~elek]'s [previous 
comment|https://issues.apache.org/jira/browse/HDDS-1458?focusedCommentId=16845291=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16845291]
 with the assumption that the ozoneblockade compose files are removed from the dist tarball.


was (Author: eyang):
Patch 012 addresses 1, 2, 4, 5, 6, and 7 in [~elek]'s [previous 
comment|https://issues.apache.org/jira/browse/HDDS-1458?focusedCommentId=16845291=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16845291]
 with the assumption that the ozoneblockade compose files are removed.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=246967=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246967
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 22/May/19 20:00
Start Date: 22/May/19 20:00
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #819:  HDDS-1501 : Create a 
Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-494943784
 
 
   +1 with conflicts resolved.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246967)
Time Spent: 4h 20m  (was: 4h 10m)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846198#comment-16846198
 ] 

Eric Yang commented on HDDS-1458:
-

Patch 012 addresses 1, 2, 4, 5, 6, and 7 in [~elek]'s [previous 
comment|https://issues.apache.org/jira/browse/HDDS-1458?focusedCommentId=16845291=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16845291]
 with the assumption that the ozoneblockade compose files are removed.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=246962=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246962
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 22/May/19 19:56
Start Date: 22/May/19 19:56
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #819:  HDDS-1501 : Create a 
Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-494940308
 
 
   Hi @avijayanhwx there is a merge conflict in RDBStore.java. Can you take a 
look?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246962)
Time Spent: 4h 10m  (was: 4h)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1458:

Attachment: HDDS-1458.011.patch

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1065) OM and DN should persist SCM certificate as the trust root

2019-05-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846183#comment-16846183
 ] 

Hudson commented on HDDS-1065:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16587 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16587/])
HDDS-1065. OM and DN should persist SCM certificate as the trust root. (xyao: 
rev 9c61494c02ee5fc27841a0d82959a8a2acc18a4e)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java
* (edit) hadoop-hdds/common/src/main/proto/SCMSecurityProtocol.proto
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/CertificateClientTestImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/CertificateClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java


> OM and DN should persist SCM certificate as the trust root
> --
>
> Key: HDDS-1065
> URL: https://issues.apache.org/jira/browse/HDDS-1065
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> OM and DN should persist SCM certificate as the trust root.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1065) OM and DN should persist SCM certificate as the trust root

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1065?focusedWorklogId=246940=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246940
 ]

ASF GitHub Bot logged work on HDDS-1065:


Author: ASF GitHub Bot
Created on: 22/May/19 19:24
Start Date: 22/May/19 19:24
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #834: HDDS-1065. OM 
and DN should persist SCM certificate as the trust root. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/834
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246940)
Time Spent: 3h 10m  (was: 3h)

> OM and DN should persist SCM certificate as the trust root
> --
>
> Key: HDDS-1065
> URL: https://issues.apache.org/jira/browse/HDDS-1065
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> OM and DN should persist SCM certificate as the trust root.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1065) OM and DN should persist SCM certificate as the trust root

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1065?focusedWorklogId=246939=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246939
 ]

ASF GitHub Bot logged work on HDDS-1065:


Author: ASF GitHub Bot
Created on: 22/May/19 19:24
Start Date: 22/May/19 19:24
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #834: HDDS-1065. OM and DN 
should persist SCM certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/834#issuecomment-494919523
 
 
   +1. Thanks @ajayydv for the contribution. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246939)
Time Spent: 3h  (was: 2h 50m)

> OM and DN should persist SCM certificate as the trust root
> --
>
> Key: HDDS-1065
> URL: https://issues.apache.org/jira/browse/HDDS-1065
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> OM and DN should persist SCM certificate as the trust root.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1582) Fix BindException due to address already in use in unit tests

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1582?focusedWorklogId=246926=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246926
 ]

ASF GitHub Bot logged work on HDDS-1582:


Author: ASF GitHub Bot
Created on: 22/May/19 19:17
Start Date: 22/May/19 19:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #844: HDDS-1582. Fix 
BindException due to address already in use in unit tests. Contributed by Mukul 
Kumar Singh.
URL: https://github.com/apache/hadoop/pull/844#issuecomment-494914823
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 62 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 605 | trunk passed |
   | +1 | compile | 286 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 867 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 310 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 513 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 513 | the patch passed |
   | +1 | compile | 272 | the patch passed |
   | +1 | javac | 272 | the patch passed |
   | -0 | checkstyle | 35 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 93 | patch has no errors when building and testing our 
client artifacts. |
   | +1 | javadoc | 145 | the patch passed |
   | +1 | findbugs | 533 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 181 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1863 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 9128 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.om.TestScmSafeMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-844/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/844 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 980db0d9713b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a315913 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-844/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-844/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-844/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-844/1/testReport/ |
   | Max. process+thread count | 5058 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-844/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246926)
Time Spent: 0.5h  (was: 20m)

> Fix BindException due to address already in use in unit tests
> -
>
> Key: HDDS-1582
> URL: https://issues.apache.org/jira/browse/HDDS-1582
> Project: Hadoop 

[jira] [Updated] (HDDS-1584) Fix TestFailureHandlingByClient tests

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1584:
-
Labels: pull-request-available  (was: )

> Fix TestFailureHandlingByClient tests
> -
>
> Key: HDDS-1584
> URL: https://issues.apache.org/jira/browse/HDDS-1584
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1584) Fix TestFailureHandlingByClient tests

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1584?focusedWorklogId=246916=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246916
 ]

ASF GitHub Bot logged work on HDDS-1584:


Author: ASF GitHub Bot
Created on: 22/May/19 19:15
Start Date: 22/May/19 19:15
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #845: HDDS-1584. 
Fix TestFailureHandlingByClient tests
URL: https://github.com/apache/hadoop/pull/845
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246916)
Time Spent: 10m
Remaining Estimate: 0h

> Fix TestFailureHandlingByClient tests
> -
>
> Key: HDDS-1584
> URL: https://issues.apache.org/jira/browse/HDDS-1584
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14487) Missing Space in Client Error Message

2019-05-22 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846165#comment-16846165
 ] 

Shweta commented on HDFS-14487:
---

Uploaded patch, please review. Jenkins needs to be triggered too.

> Missing Space in Client Error Message
> -
>
> Key: HDFS-14487
> URL: https://issues.apache.org/jira/browse/HDFS-14487
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HDFS-14487.001.patch
>
>
> {code:java}
>   if (retries == 0) {
> throw new IOException("Unable to close file because the last 
> block"
> + last + " does not have enough number of replicas.");
>   }
> {code}
> Note the missing space after "last block".
> https://github.com/apache/hadoop/blob/f940ab242da80a22bae95509d5c282d7e2f7ecdb/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L968-L969
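
For reference, the corrected message would simply add the trailing space inside 
the string literal, roughly:

{code:java}
if (retries == 0) {
  // Space after "block " keeps the block name from running into the preceding text.
  throw new IOException("Unable to close file because the last block "
      + last + " does not have enough number of replicas.");
}
{code}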



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14487) Missing Space in Client Error Message

2019-05-22 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDFS-14487:
-

Assignee: Shweta

> Missing Space in Client Error Message
> -
>
> Key: HDFS-14487
> URL: https://issues.apache.org/jira/browse/HDFS-14487
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HDFS-14487.001.patch
>
>
> {code:java}
>   if (retries == 0) {
> throw new IOException("Unable to close file because the last 
> block"
> + last + " does not have enough number of replicas.");
>   }
> {code}
> Note the missing space after "last block".
> https://github.com/apache/hadoop/blob/f940ab242da80a22bae95509d5c282d7e2f7ecdb/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L968-L969



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1580) Obtain Handler reference in ContainerScrubber

2019-05-22 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846156#comment-16846156
 ] 

Shweta commented on HDDS-1580:
--

Pull request is available.

[~hgadre], [~arpaga] can you please review and suggest if any changes are 
needed?

> Obtain Handler reference in ContainerScrubber
> -
>
> Key: HDDS-1580
> URL: https://issues.apache.org/jira/browse/HDDS-1580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Obtain reference to Handler based on containerType in scrub() in 
> ContainerScrubber.java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1065) OM and DN should persist SCM certificate as the trust root

2019-05-22 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1065:
-
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution and all for the reviews. I've committed 
the patch to trunk. 

> OM and DN should persist SCM certificate as the trust root
> --
>
> Key: HDDS-1065
> URL: https://issues.apache.org/jira/browse/HDDS-1065
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> OM and DN should persist SCM certificate as the trust root.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-05-22 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846144#comment-16846144
 ] 

Erik Krogen commented on HDFS-12979:


Regarding the use of the static variable to pass the tests, it seems kind of 
messy, and requires modification in a lot of places. Even if we go with this 
approach, can we perhaps add it to {{MiniDFSCluster}} or something to avoid 
having to do this step in multiple tests?

That being said, I wonder if we can set some values of 
{{DFS_NAMENODE_CHECKPOINT_PERIOD_KEY}} and/or 
{{DFS_NAMENODE_CHECKPOINT_TXNS_KEY}} within {{MiniDFSCluster}} to achieve the 
same thing without the use of a test-only variable?
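
A rough sketch of what that could look like in a test (illustrative only; the 
period/txns values are arbitrary, and the usual Configuration/DFSConfigKeys/
MiniDFSCluster imports are assumed):

{code:java}
// Illustrative: force frequent checkpoints through the test configuration so
// that no test-only static variable is needed.
Configuration conf = new Configuration();
conf.setLong(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_PERIOD_KEY, 1L); // seconds
conf.setLong(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_TXNS_KEY, 1L);   // txns per checkpoint
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1)
    .build();
{code}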

Some other smaller comments:
* You have a typo, {{addresString}} instead of {{addressString}}
* I think we can remove the TODO regarding interrupted vs. IO exception 
handling?
* Regarding testing, it seems we have a test for the standby being able to 
upload to both active and observer. But I think we also need to test for the 
case of the Active NN rejecting the request?

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch, HDFS-12979.007.patch, HDFS-12979.008.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrap of the ObserverNode take too long. A StandbyNode should 
> copy the latest fsimage to the ObserverNode(s) along with the ANN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846131#comment-16846131
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] {quote}Can you please delete the original one from the 
compose/ozoneblockade dir of the dirst tar.{quote}

Sorry, I am confused about this ask.  The compose/ozoneblockade dir is already 
removed from the dist project.  Do you want me to remove compose/ozoneblockade 
from the tarball too?  If yes, this implies that the python script will default 
to using $OZONE_HOME/compose/ozone as the source of yaml files for testing.  If 
no, please clarify.

For clarity on locating docker-compose file:
{code}
import os

# Locate docker-compose.yaml depending on how the tests are being run.
if "MAVEN_TEST" in os.environ:
  compose_dir = os.environ.get("MAVEN_TEST")
  FILE = os.path.join(compose_dir, "docker-compose.yaml")
elif "OZONE_HOME" in os.environ:
  compose_dir = os.environ.get("OZONE_HOME")
  FILE = os.path.join(compose_dir, "compose", "ozone", "docker-compose.yaml")
else:
  compose_dir = os.getcwd()
  FILE = os.path.join(compose_dir, "compose", "ozone", "docker-compose.yaml")
{code}

# MAVEN_TEST is self-explanatory: it locates docker compose in the maven build 
directory.
# OZONE_HOME is used if the binary executable is symlinked to a location such as 
/usr/bin, where the PATH variable can find the script via the symlink location 
but the OZONE_HOME variable is needed to help locate the compose file.
# If the system environment is not configured and no OZONE_HOME variable is 
defined, look for the compose file relative to the current location.  This style 
is used when expanding the tarball and running it directly.

Hope this explains the three types of lookups that the script has been improved 
to support.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14500) NameNode StartupProgress continues to report edit log segments after the LOADING_EDITS phase is finished

2019-05-22 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14500:
---
Attachment: HDFS-14500.001.patch

> NameNode StartupProgress continues to report edit log segments after the 
> LOADING_EDITS phase is finished
> 
>
> Key: HDFS-14500
> URL: https://issues.apache.org/jira/browse/HDFS-14500
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 2.8.5, 3.1.2
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14500.000.patch, HDFS-14500.001.patch
>
>
> When testing out a cluster with the edit log tailing fast path feature 
> enabled (HDFS-13150), an unrelated issue caused the NameNode to remain in 
> safe mode for an extended period of time, preventing the NameNode from fully 
> completing its startup sequence. We noticed that the Startup Progress web UI 
> displayed many edit log segments (millions of them).
> I traced this problem back to {{StartupProgress}}. Within 
> {{FSEditLogLoader}}, the loader continually tries to update the startup 
> progress with a new {{Step}} any time that it loads edits. Per the Javadoc 
> for {{StartupProgress}}, this should be a no-op once startup is completed:
> {code:title=StartupProgress.java}
>  * After startup completes, the tracked data is frozen.  Any subsequent 
> updates
>  * or counter increments are no-ops.
> {code}
> However, {{StartupProgress}} only implements that logic once the _entire_ 
> startup sequence has been completed. When {{FSEditLogLoader}} calls 
> {{addStep()}}, it adds it into the {{LOADING_EDITS}} phase:
> {code:title=FSEditLogLoader.java}
> StartupProgress prog = NameNode.getStartupProgress();
> Step step = createStartupProgressStep(edits);
> prog.beginStep(Phase.LOADING_EDITS, step);
> {code}
> This phase, in our case, ended long before, so it is nonsensical to continue 
> to add steps to it. I believe it is a bug that {{StartupProgress}} accepts 
> such steps instead of ignoring them; once a phase is complete, it should no 
> longer change.
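
A minimal sketch of the kind of guard this suggests (assuming a per-phase status 
check; the actual field and method names in {{StartupProgress}} may differ):

{code:java}
// Hypothetical guard in StartupProgress.beginStep(): ignore updates once the
// phase has completed, mirroring the documented freeze-after-startup behavior.
public void beginStep(Phase phase, Step step) {
  if (getStatus(phase) == Status.COMPLETE) {
    return; // phase is frozen; late edit-log segments become no-ops
  }
  // ... existing step-tracking logic ...
}
{code}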



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1584) Fix TestFailureHandlingByClient tests

2019-05-22 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1584:
--
Status: Patch Available  (was: Open)

> Fix TestFailureHandlingByClient tests
> -
>
> Key: HDDS-1584
> URL: https://issues.apache.org/jira/browse/HDDS-1584
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.1
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14500) NameNode StartupProgress continues to report edit log segments after the LOADING_EDITS phase is finished

2019-05-22 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846127#comment-16846127
 ] 

Erik Krogen commented on HDFS-14500:


Yes, your understanding is correct. Considering that the changes are in 
{{StartupProgress}} and every phase in {{StartupProgress}} is treated 
identically, I don't think it matters which phase we use in the test. I used 
{{LOADING_FSIMAGE}} since that is the phase which the test was already 
modifying. Let me know if you still think we should change it.

Uploading v001 patch to fix whitespace and test failure.

> NameNode StartupProgress continues to report edit log segments after the 
> LOADING_EDITS phase is finished
> 
>
> Key: HDFS-14500
> URL: https://issues.apache.org/jira/browse/HDFS-14500
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 2.8.5, 3.1.2
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14500.000.patch
>
>
> When testing out a cluster with the edit log tailing fast path feature 
> enabled (HDFS-13150), an unrelated issue caused the NameNode to remain in 
> safe mode for an extended period of time, preventing the NameNode from fully 
> completing its startup sequence. We noticed that the Startup Progress web UI 
> displayed many edit log segments (millions of them).
> I traced this problem back to {{StartupProgress}}. Within 
> {{FSEditLogLoader}}, the loader continually tries to update the startup 
> progress with a new {{Step}} any time that it loads edits. Per the Javadoc 
> for {{StartupProgress}}, this should be a no-op once startup is completed:
> {code:title=StartupProgress.java}
>  * After startup completes, the tracked data is frozen.  Any subsequent 
> updates
>  * or counter increments are no-ops.
> {code}
> However, {{StartupProgress}} only implements that logic once the _entire_ 
> startup sequence has been completed. When {{FSEditLogLoader}} calls 
> {{addStep()}}, it adds it into the {{LOADING_EDITS}} phase:
> {code:title=FSEditLogLoader.java}
> StartupProgress prog = NameNode.getStartupProgress();
> Step step = createStartupProgressStep(edits);
> prog.beginStep(Phase.LOADING_EDITS, step);
> {code}
> This phase, in our case, ended long before, so it is nonsensical to continue 
> to add steps to it. I believe it is a bug that {{StartupProgress}} accepts 
> such steps instead of ignoring them; once a phase is complete, it should no 
> longer change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1584) Fix TestFailureHandlingByClient tests

2019-05-22 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1584:
-

 Summary: Fix TestFailureHandlingByClient tests
 Key: HDDS-1584
 URL: https://issues.apache.org/jira/browse/HDDS-1584
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.4.1
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.1






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14475) RBF: Expose router security enabled status on the UI

2019-05-22 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846105#comment-16846105
 ] 

CR Hota commented on HDFS-14475:


[~brahmareddy] [~ayushtkn] Thanks for the previous review.

Incorporated the nit changes in the 002 patch. For refactoring/clean-up, created 
HDFS-14508.

> RBF: Expose router security enabled status on the UI
> 
>
> Key: HDFS-14475
> URL: https://issues.apache.org/jira/browse/HDFS-14475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14475-HDFS-13891.001.patch, 
> HDFS-14475-HDFS-13891.002.patch
>
>
> This is a branched off Jira to expose metric so that router's security status 
> can be displayed on the UI. We are still unclear if more work needs to be 
> done for dealing with CORS etc. 
> https://issues.apache.org/jira/browse/HDFS-12510 will continue to track that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-22 Thread CR Hota (JIRA)
CR Hota created HDFS-14508:
--

 Summary: RBF: Clean-up and refactor UI components
 Key: HDFS-14508
 URL: https://issues.apache.org/jira/browse/HDFS-14508
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: CR Hota


The Router UI has tags that are either unused or incorrectly set. The code should be 
cleaned up.

One such example is:

Path: 
(hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js)
{code:java}
{"name": "routerstat", "url": 
"/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14475) RBF: Expose router security enabled status on the UI

2019-05-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846099#comment-16846099
 ] 

Hadoop QA commented on HDFS-14475:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
23s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14475 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969412/HDFS-14475-HDFS-13891.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b588754d121b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26817/testReport/ |
| Max. process+thread count | 1026 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26817/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Expose router security enabled status on the UI
> 
>
>   

[jira] [Assigned] (HDDS-1557) Datanode exits because Ratis fails to shutdown ratis server

2019-05-22 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey reassigned HDDS-1557:
--

Assignee: Aravindan Vijayan

> Datanode exits because Ratis fails to shutdown ratis server 
> 
>
> Key: HDDS-1557
> URL: https://issues.apache.org/jira/browse/HDDS-1557
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> Datanode exits because Ratis fails to shutdown ratis server 
> {code}
> 2019-05-19 12:07:19,276 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(965)) - 
> 80747533-f47c-43de-85b8-e70db448c63f: inconsistency entries. 
> Reply:99930d0a-72ab-4795-a3ac-f3c
> fb61ca1bb<-80747533-f47c-43de-85b8-e70db448c63f#3132:FAIL,INCONSISTENCY,nextIndex:9057,term:33,followerCommit:9057
> 2019-05-19 12:07:19,276 WARN  impl.RaftServerProxy 
> (RaftServerProxy.java:lambda$close$4(320)) - 
> e143b976-ab35-4555-a800-7f05a2b1b738: Failed to close GRPC server
> java.io.InterruptedIOException: e143b976-ab35-4555-a800-7f05a2b1b738: 
> shutdown server with port 64605 failed
> at 
> org.apache.ratis.util.IOUtils.toInterruptedIOException(IOUtils.java:48)
> at 
> org.apache.ratis.grpc.server.GrpcService.closeImpl(GrpcService.java:160)
> at 
> org.apache.ratis.server.impl.RaftServerRpcWithProxy.lambda$close$2(RaftServerRpcWithProxy.java:76)
> at 
> org.apache.ratis.util.LifeCycle.lambda$checkStateAndClose$2(LifeCycle.java:231)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:251)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:229)
> at 
> org.apache.ratis.server.impl.RaftServerRpcWithProxy.close(RaftServerRpcWithProxy.java:76)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$close$4(RaftServerProxy.java:318)
> at 
> org.apache.ratis.util.LifeCycle.lambda$checkStateAndClose$2(LifeCycle.java:231)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:251)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:229)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.close(RaftServerProxy.java:313)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.stop(XceiverServerRatis.java:432)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.stop(OzoneContainer.java:201)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.close(DatanodeStateMachine.java:270)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.stopDaemon(DatanodeStateMachine.java:394)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.stop(HddsDatanodeService.java:449)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.terminateDatanode(HddsDatanodeService.java:429)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:208)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:349)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.InterruptedException
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl.awaitTermination(ServerImpl.java:282)
> at 
> org.apache.ratis.grpc.server.GrpcService.closeImpl(GrpcService.java:158)
> ... 19 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846089#comment-16846089
 ] 

Eric Yang commented on HDDS-1458:
-

Moved start-build-env.sh improvement to HADOOP issue.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects. 
> We can introduce a profile with id {{it}} (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845508#comment-16845508
 ] 

Eric Yang edited comment on HDDS-1458 at 5/22/19 5:39 PM:
--

{quote}Can you please delete the original one from the compose/ozoneblockade 
dir of the dirst tar.{quote}

Sure.

{quote}Can you please remove the -dev compose file? It can be confusing and 
this is required only during the maven builds (In the current form the 
dist-layout-stitching lines are duplicated. Fix me If I am wrong but the 
compose files are copied twice{quote}

The two compose files are structured to handle the maven and tarball 
environments independently.  With the recent changes to the docker image build 
process, we can remove the -dev compose file.

{quote}Please use path which is relative to the py file instead of 
getcwd:{quote}

In the maven environment, it will try to use the docker compose file in 
${basedir}/target because it is localized with additional properties like the 
version number.  The script path does not tell us where the localized version of 
the docker compose file is relative to the script.  This is the reason that the 
script computes the current working directory in the maven environment.

{quote}tests prefix is missing from README.md L38{quote}

Will check.

{quote} I didn't have time yet to understood ./start-build-env.sh changes. But 
it may be better to do in a HADOOP jira to get more visibility. If I understood 
well it's not directly required but it's a generic improvements to make it 
possible to run the ozone tests in the build docker containers. What do you 
think?{quote}

OK, then we will need that patch checked in before this one for maven verify to 
work; opened HADOOP-16325 for this part of the improvement.

{quote}Can you please help me to understand why do we need to set OZONE_HOME in 
hadoop-ozone/fault-injection-test/network-tests/pom.xml. The MAVEN_HOME seems 
to be enough.{quote}

That is to make the script more robust: in the event that we want to run the 
python script and the docker compose files are not placed in the same location 
as the python script, it gives us a frame of reference to locate the required 
files.  I made a habit of using a _HOME variable to look up relative paths; in 
case we decide to move the compose file to share/ozone/compose, it will be 
easier to change the reference without depending on the python script's relative 
location.

{quote} Not clear the expected relationship between the network-tests and dist. 
Can we add a provided dependency between them to make it explicit?{quote}

They are independent of each other, so there is no need to make the dependency 
explicit.  If we can improve the docker compose files some more, we might be 
able to do a parallel build to reduce the time it takes to run the fault 
injection tests: mvn -T 4 clean verify -Pit



was (Author: eyang):
{quote}Can you please delete the original one from the compose/ozoneblockade 
dir of the dirst tar.{quote}

Sure.

{quote}Can you please remove the -dev compose file? It can be confusing and 
this is required only during the maven builds (In the current form the 
dist-layout-stitching lines are duplicated. Fix me If I am wrong but the 
compose files are copied twice{quote}

The two compose files are structured to handle in maven and in tarball 
environment independently.  With the recent changes to docker image build 
process, we can remove -dev compose file.

{quote}Please use path which is relative to the py file instead of 
getcwd:{quote}

In maven environment, it will try to use docker compose file in 
${basedir}/target because it is localized with additional properties like 
version number.  The script path does not tell us where localized version of 
docker compose file is relative to the script.  This is the reason that the 
script computes the current working directory for maven environment.

{quote}tests prefix is missing from README.md L38{quote}

Will check.

{quote} I didn't have time yet to understood ./start-build-env.sh changes. But 
it may be better to do in a HADOOP jira to get more visibility. If I understood 
well it's not directly required but it's a generic improvements to make it 
possible to run the ozone tests in the build docker containers. What do you 
think?{quote}

Ok, then we will need that patch check in before this one for maven verify to 
work.

{quote}Can you please help me to understand why do we need to set OZONE_HOME in 
hadoop-ozone/fault-injection-test/network-tests/pom.xml. The MAVEN_HOME seems 
to be enough.{quote}

That is to make the script more proper, in the event that we want to run the 
python script, and the docker compose files are not placed in the same location 
as python script, it gives us a frame of reference to locate the required 
files.  I made a habit of using _HOME variable to look up relative path, in 
case if we decided to move compose file to 

[jira] [Updated] (HDFS-14475) RBF: Expose router security enabled status on the UI

2019-05-22 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14475:
---
Attachment: HDFS-14475-HDFS-13891.002.patch

> RBF: Expose router security enabled status on the UI
> 
>
> Key: HDFS-14475
> URL: https://issues.apache.org/jira/browse/HDFS-14475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14475-HDFS-13891.001.patch, 
> HDFS-14475-HDFS-13891.002.patch
>
>
> This is a branched-off Jira to expose a metric so that the router's security 
> status can be displayed on the UI. We are still unclear whether more work needs 
> to be done for dealing with CORS etc.; 
> https://issues.apache.org/jira/browse/HDFS-12510 will continue to track that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1583) EOFException for Ozone RPC client

2019-05-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846029#comment-16846029
 ] 

Eric Yang commented on HDDS-1583:
-

I found that 30 retries over a period of one minute seem to work in 
most general cases.  If I recall correctly, HBase has a client backoff setting 
to throttle the retries.  Maybe similar logic can be implemented here.
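
A rough sketch of that kind of bounded retry (the 30-attempt/one-minute numbers come 
from the observation above; the surrounding method and the ScmClient type are 
illustrative, not an actual Ozone API):

{code:java}
// Illustrative sketch only: bounded retry with a fixed backoff while the SCM
// server finishes starting. SCMCLI().createScmClient() is taken from the stack
// trace in the description; the retry count and sleep are not real configuration.
private ScmClient connectWithRetry() throws IOException, InterruptedException {
  IOException lastError = null;
  for (int attempt = 0; attempt < 30; attempt++) {
    try {
      return new SCMCLI().createScmClient();
    } catch (EOFException e) {   // transient: server not fully up yet
      lastError = e;
      Thread.sleep(2000L);       // ~30 attempts spread over one minute
    }
  }
  throw lastError;
}
{code}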

> EOFException for Ozone RPC client
> -
>
> Key: HDDS-1583
> URL: https://issues.apache.org/jira/browse/HDDS-1583
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> We discovered a bug in the Ozone RPC client.  If the server is in starting state 
> and not completely started, calling new SCMCLI().createScmClient(); results 
> in an EOFException.  Most client software retries connection establishment for 
> a brief period of time without throwing errors, to ensure that transient errors 
> are not over-alarming to client code.  
> The experience can be improved by making sure that the connection logic retries a 
> few times before giving up.  See the related stack trace:
> {code}java.io.EOFException: End of File Exception between local host is: 
> "localhost.localdomain/127.0.0.1"; destination host is: 
> "localhost.localdomain":9860; : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:789)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy12.inSafeMode(Unknown Source)
> at 
> org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.inSafeMode(StorageContainerLocationProtocolClientSideTranslatorPB.java:383)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
> at com.sun.proxy.$Proxy13.inSafeMode(Unknown Source)
> at 
> org.apache.hadoop.hdds.scm.client.ContainerOperationClient.inSafeMode(ContainerOperationClient.java:456)
> at 
> org.apache.hadoop.ozone.ITDiskReadWrite.setUp(ITDiskReadWrite.java:43)
> at junit.framework.TestCase.runBare(TestCase.java:139)
> at junit.framework.TestResult$1.protect(TestResult.java:122)
> at junit.framework.TestResult.runProtected(TestResult.java:142)
> at junit.framework.TestResult.run(TestResult.java:125)
> at junit.framework.TestCase.run(TestCase.java:129)
> at junit.framework.TestSuite.runTest(TestSuite.java:255)
> at junit.framework.TestSuite.run(TestSuite.java:250)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
> at org.junit.runners.Suite.runChild(Suite.java:127)
> at org.junit.runners.Suite.runChild(Suite.java:26)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
> at 
> 

[jira] [Work logged] (HDDS-1582) Fix BindException due to address already in use in unit tests

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1582?focusedWorklogId=246821=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246821
 ]

ASF GitHub Bot logged work on HDDS-1582:


Author: ASF GitHub Bot
Created on: 22/May/19 16:16
Start Date: 22/May/19 16:16
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #844: HDDS-1582. 
Fix BindException due to address already in use in unit tests. Contributed by 
Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/844
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246821)
Time Spent: 20m  (was: 10m)

> Fix BindException due to address already in use in unit tests
> -
>
> Key: HDDS-1582
> URL: https://issues.apache.org/jira/browse/HDDS-1582
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This fixes the issues seen in HDDS-1384 & HDDS-1282, where unit tests are 
> timing out because of BindException.
> The fix is to use Socket.bind in place of server sockets. The biggest 
> difference is that ServerSocket will accept and listen after binding to 
> the socket, and this keeps the socket in the TIME_WAIT state after close. 
> Please refer to 
> https://docs.oracle.com/javase/tutorial/networking/sockets/definition.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1583) EOFException for Ozone RPC client

2019-05-22 Thread Eric Yang (JIRA)
Eric Yang created HDDS-1583:
---

 Summary: EOFException for Ozone RPC client
 Key: HDDS-1583
 URL: https://issues.apache.org/jira/browse/HDDS-1583
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Eric Yang


We discovered a bug in the Ozone RPC client.  If the server is in starting state and 
not completely started, calling new SCMCLI().createScmClient(); results 
in an EOFException.  Most client software retries connection establishment for a 
brief period of time without throwing errors, to ensure that transient errors 
are not over-alarming to client code.  The 
experience can be improved by making sure that the connection logic retries a few 
times before giving up.  See the related stack trace:

{code}java.io.EOFException: End of File Exception between local host is: 
"localhost.localdomain/127.0.0.1"; destination host is: 
"localhost.localdomain":9860; : java.io.EOFException; For more details see:  
http://wiki.apache.org/hadoop/EOFException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:789)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy12.inSafeMode(Unknown Source)
at 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.inSafeMode(StorageContainerLocationProtocolClientSideTranslatorPB.java:383)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
at com.sun.proxy.$Proxy13.inSafeMode(Unknown Source)
at 
org.apache.hadoop.hdds.scm.client.ContainerOperationClient.inSafeMode(ContainerOperationClient.java:456)
at 
org.apache.hadoop.ozone.ITDiskReadWrite.setUp(ITDiskReadWrite.java:43)
at junit.framework.TestCase.runBare(TestCase.java:139)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at 
org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 

[jira] [Updated] (HDDS-4) Implement security for Hadoop Distributed Storage Layer

2019-05-22 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-4:
--
Fix Version/s: 0.4.0

> Implement security for Hadoop Distributed Storage Layer 
> 
>
> Key: HDDS-4
> URL: https://issues.apache.org/jira/browse/HDDS-4
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Anu Engineer
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HadoopStorageLayerSecurity.pdf
>
>
> In HDFS-7240, we have created a scalable block layer that facilitates 
> separation of the namespace and block layers.  The Hadoop Distributed Storage 
> Layer (HDSL) allows us to scale HDFS (HDFS-10419) as well as create Ozone 
> (HDFS-13074).
> This JIRA is an umbrella JIRA that tracks the security-related work items for 
> Hadoop Distributed Storage Layer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4) Implement security for Hadoop Distributed Storage Layer

2019-05-22 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-4.
---
Resolution: Fixed

Resolved as all subtasks are completed and merged.

> Implement security for Hadoop Distributed Storage Layer 
> 
>
> Key: HDDS-4
> URL: https://issues.apache.org/jira/browse/HDDS-4
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Anu Engineer
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HadoopStorageLayerSecurity.pdf
>
>
> In HDFS-7240, we have created a scalable block layer that facilitates 
> separation of the namespace and block layers.  The Hadoop Distributed Storage 
> Layer (HDSL) allows us to scale HDFS (HDFS-10419) as well as create Ozone 
> (HDFS-13074).
> This JIRA is an umbrella JIRA that tracks the security-related work items for 
> Hadoop Distributed Storage Layer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1582) Fix BindException due to address already in use in unit tests

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1582:
-
Labels: pull-request-available  (was: )

> Fix BindException due to address already in use in unit tests
> -
>
> Key: HDDS-1582
> URL: https://issues.apache.org/jira/browse/HDDS-1582
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
>  Labels: pull-request-available
>
> This fixes the issues seen in HDDS-1384 & HDDS-1282, where unit tests are 
> timing out because of BindException.
> The fix is to use Socket.bind in place of server sockets. The biggest 
> difference is that ServerSocket will accept and listen after binding to 
> the socket, and this keeps the socket in the TIME_WAIT state after close. 
> Please refer to 
> https://docs.oracle.com/javase/tutorial/networking/sockets/definition.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1582) Fix BindException due to address already in use in unit tests

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1582?focusedWorklogId=246817=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246817
 ]

ASF GitHub Bot logged work on HDDS-1582:


Author: ASF GitHub Bot
Created on: 22/May/19 15:59
Start Date: 22/May/19 15:59
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #844: HDDS-1582. 
Fix BindException due to address already in use in unit tests. Contributed by 
Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/844
 
 
   The fix is to use Socket.bind in place of server sockets. The biggest 
difference is that ServerSocket will accept and listen after binding to the 
socket, and this keeps the socket in the TIME_WAIT state after close. Please 
refer to 
https://docs.oracle.com/javase/tutorial/networking/sockets/definition.html
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246817)
Time Spent: 10m
Remaining Estimate: 0h

> Fix BindException due to address already in use in unit tests
> -
>
> Key: HDDS-1582
> URL: https://issues.apache.org/jira/browse/HDDS-1582
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This fixes the issues seen in HDDS-1384 & HDDS-1282, where unit tests are 
> timing out because of BindException.
> The fix is to use Socket.bind in place of server sockets. The biggest 
> difference is that ServerSocket will accept and listen after binding to 
> the socket, and this keeps the socket in the TIME_WAIT state after close. 
> Please refer to 
> https://docs.oracle.com/javase/tutorial/networking/sockets/definition.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1582) Fix BindException due to address already in use in unit tests

2019-05-22 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1582:
---

 Summary: Fix BindException due to address already in use in unit 
tests
 Key: HDDS-1582
 URL: https://issues.apache.org/jira/browse/HDDS-1582
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.3.0
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


This fixes the issues seen in HDDS-1384 & HDDS-1282, where unit tests are 
timing out because of BindException.

The fix is to use Socket.bind in place of server sockets. The biggest 
difference is that ServerSocket will accept and listen after binding to the 
socket, and this keeps the socket in the TIME_WAIT state after close. Please 
refer to 
https://docs.oracle.com/javase/tutorial/networking/sockets/definition.html
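
A rough sketch of the approach (not the actual patch; the helper name is 
illustrative): reserve a free ephemeral port by binding a plain java.net.Socket, so 
no accept/listen happens and the port is not left behind in TIME_WAIT when closed.

{code:java}
// Sketch: pick a free local port with Socket.bind instead of ServerSocket.
public static int getFreePort() throws java.io.IOException {
  try (java.net.Socket socket = new java.net.Socket()) {
    socket.setReuseAddress(true);
    socket.bind(new java.net.InetSocketAddress("localhost", 0)); // 0 = any free port
    return socket.getLocalPort();
  }
}
{code}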






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1565) Rename k8s-dev and k8s-dev-push profiles to docker-build and docker-push

2019-05-22 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton resolved HDDS-1565.

Resolution: Fixed

> Rename k8s-dev and k8s-dev-push profiles to docker-build and docker-push
> 
>
> Key: HDDS-1565
> URL: https://issues.apache.org/jira/browse/HDDS-1565
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Based on the feedback from [~eyang], I realized that the names of the k8s-dev 
> and k8s-dev-push profiles are not expressive enough, as the created containers 
> can be used not only with kubernetes but also with any other 
> container orchestrator.
> I propose to rename them to docker-build/docker-push.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1502) Add metrics for Ozone Ratis performance

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1502?focusedWorklogId=246711=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246711
 ]

ASF GitHub Bot logged work on HDDS-1502:


Author: ASF GitHub Bot
Created on: 22/May/19 12:54
Start Date: 22/May/19 12:54
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #833: HDDS-1502. 
Add metrics for Ozone Ratis performance.
URL: https://github.com/apache/hadoop/pull/833#discussion_r286473537
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
 ##
 @@ -86,7 +107,7 @@ public long getNumWriteStateMachineOps() {
 
   @VisibleForTesting
   public long getNumReadStateMachineOps() {
-return numReadStateMachineOps.value();
+return queryStateMachineOps.value();
 
 Review comment:
   Addressed
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246711)
Time Spent: 1h 10m  (was: 1h)

> Add metrics for Ozone Ratis performance
> ---
>
> Key: HDDS-1502
> URL: https://issues.apache.org/jira/browse/HDDS-1502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This jira will add some metrics for Ratis pipeline performance:
> 1) number of bytes written
> 2) number of Read StateMachine calls
> 3) number of Read StateMachine failures
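> A hedged sketch of counters along these lines, using the metrics2 helpers 
> referenced in the review comments above (class and field names are illustrative, 
> not the final CSMMetrics):
> {code:java}
> import org.apache.hadoop.metrics2.annotation.Metric;
> import org.apache.hadoop.metrics2.annotation.Metrics;
> import org.apache.hadoop.metrics2.lib.MutableCounterLong;
>
> @Metrics(about = "Ratis pipeline metrics (sketch)", context = "dfs")
> class RatisPipelineMetricsSketch {
>   @Metric private MutableCounterLong numBytesWritten;
>   @Metric private MutableCounterLong numReadStateMachineOps;
>   @Metric private MutableCounterLong numReadStateMachineFails;
>
>   void incrBytesWritten(long bytes) { numBytesWritten.incr(bytes); }
>   void incrReadStateMachineOps()    { numReadStateMachineOps.incr(); }
>   void incrReadStateMachineFails()  { numReadStateMachineFails.incr(); }
> }
> {code}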



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1502) Add metrics for Ozone Ratis performance

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1502?focusedWorklogId=246710=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246710
 ]

ASF GitHub Bot logged work on HDDS-1502:


Author: ASF GitHub Bot
Created on: 22/May/19 12:54
Start Date: 22/May/19 12:54
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #833: HDDS-1502. 
Add metrics for Ozone Ratis performance.
URL: https://github.com/apache/hadoop/pull/833#discussion_r286473403
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
 ##
 @@ -24,6 +24,7 @@
 import org.apache.hadoop.metrics2.annotation.Metrics;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+import org.apache.hadoop.metrics2.lib.MutableRate;
 
 Review comment:
   Addressed in the next commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246710)
Time Spent: 1h  (was: 50m)

> Add metrics for Ozone Ratis performance
> ---
>
> Key: HDDS-1502
> URL: https://issues.apache.org/jira/browse/HDDS-1502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jira will add some metrics for Ratis pipeline performance:
> 1) number of bytes written
> 2) number of Read StateMachine calls
> 3) number of Read StateMachine failures



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1517) AllocateBlock call fails with ContainerNotFoundException

2019-05-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845825#comment-16845825
 ] 

Hudson commented on HDDS-1517:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16586 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16586/])
HDDS-1517. AllocateBlock call fails with ContainerNotFoundException 
(shashikant: rev a315913c48f475a31065de48a441c7faae89ab15)
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java


> AllocateBlock call fails with ContainerNotFoundException
> 
>
> Key: HDDS-1517
> URL: https://issues.apache.org/jira/browse/HDDS-1517
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1517.000.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In the allocateContainer call, the container is first added to the pipelineStateMap 
> and then added to the container cache. If two allocateBlock calls execute 
> concurrently, it might happen that one finds the container in the 
> pipelineStateMap while the container is yet to be added to the container 
> cache, hence failing with a CONTAINER_NOT_FOUND exception.
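> A rough sketch of one way to close that window (names are illustrative, not the 
> actual patch): publish the container to the container cache before it becomes 
> visible in the pipelineStateMap, so a concurrent allocateBlock cannot observe it 
> half-created.
> {code:java}
> // Illustrative ordering fix: cache first, then expose via the pipeline state map.
> ContainerInfo allocateContainer(Pipeline pipeline) throws IOException {
>   ContainerInfo container = containerStateManager.allocateContainer(pipeline);
>   containerCache.put(container.containerID(), container);     // 1. cache
>   pipelineStateMap.addContainerToPipeline(pipeline.getId(),   // 2. then expose
>       container.containerID());
>   return container;
> }
> {code}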



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1578) Add libstdc++ to ozone build docker image

2019-05-22 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1578:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the ozone-build branch (with a trivial typo fixed).

Thanks [~vivekratnavel] for the contribution.

> Add libstdc++ to ozone build docker image
> -
>
> Key: HDDS-1578
> URL: https://issues.apache.org/jira/browse/HDDS-1578
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1578.v1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> libstdc++ is required for node install in alpine builds. Otherwise we get 
> this error:
> {code:java}
> [ERROR] node: error while loading shared libraries: libstdc++.so.6: cannot 
> open shared object file: No such file or directory{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1441) Remove usage of getRetryFailureException

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1441?focusedWorklogId=246692=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246692
 ]

ASF GitHub Bot logged work on HDDS-1441:


Author: ASF GitHub Bot
Created on: 22/May/19 12:04
Start Date: 22/May/19 12:04
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #745: HDDS-1441. Remove 
usage of getRetryFailureException. (swagle)
URL: https://github.com/apache/hadoop/pull/745#issuecomment-494773928
 
 
   @swagle , I think the ratis snapshot version is incorrect, as it leads to a 
compilation failure. There are some more critical issues in Ratis which need to 
be addressed before we create a new snapshot and update the same in Ozone. Let's 
wait for the required fixes to go into Ratis and then update here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246692)
Time Spent: 3h 20m  (was: 3h 10m)

> Remove usage of getRetryFailureException
> 
>
> Key: HDDS-1441
> URL: https://issues.apache.org/jira/browse/HDDS-1441
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Per [~szetszwo]'s comment on RATIS-518, we can remove the usage of 
> getRetryFailureException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1449) JVM Exit in datanode while committing a key

2019-05-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845814#comment-16845814
 ] 

Hudson commented on HDDS-1449:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16585 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16585/])
HDDS-1449. JVM Exit in datanode while committing a key. Contributed by 
(31469764+bshashikant: rev 2fc6f8599a64bceb19e789c55012ddc42ba590bf)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ContainerCache.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerReader.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/BlockUtils.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueBlockIterator.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerCheck.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerUtil.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueBlockIterator.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java


> JVM Exit in datanode while committing a key
> ---
>
> Key: HDDS-1449
> URL: https://issues.apache.org/jira/browse/HDDS-1449
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.4.1
>
> Attachments: 2019-04-22--20-23-56-IST.MiniOzoneChaosCluster.log, 
> hs_err_pid67466.log
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Saw the following trace in MiniOzoneChaosCluster run.
> {code}
> C  [librocksdbjni17271331491728127.jnilib+0x9755c]  
> Java_org_rocksdb_RocksDB_write0+0x1c
> J 13917  org.rocksdb.RocksDB.write0(JJJ)V (0 bytes) @ 0x0001102ff62e 
> [0x0001102ff580+0xae]
> J 17167 C2 
> org.apache.hadoop.utils.RocksDBStore.writeBatch(Lorg/apache/hadoop/utils/BatchOperation;)V
>  (260 bytes) @ 0x000111bbd01c [0x000111bbcde0+0x23c]
> J 20434 C1 
> org.apache.hadoop.ozone.container.keyvalue.impl.BlockManagerImpl.putBlock(Lorg/apache/hadoop/ozone/container/common/interfaces/Container;Lorg/apache/hadoop/ozone/container/common/helpers/BlockData;)J
>  (261 bytes) @ 0x000111c267ac [0x000111c25640+0x116c]
> J 19262 C2 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (866 bytes) @ 0x0001125c5aa0 [0x0001125c1560+0x4540]
> J 15095 C2 
> 

[jira] [Updated] (HDDS-1449) JVM Exit in datanode while committing a key

2019-05-22 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1449:
--
Fix Version/s: (was: 0.5.0)
   0.4.1

> JVM Exit in datanode while committing a key
> ---
>
> Key: HDDS-1449
> URL: https://issues.apache.org/jira/browse/HDDS-1449
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.4.1
>
> Attachments: 2019-04-22--20-23-56-IST.MiniOzoneChaosCluster.log, 
> hs_err_pid67466.log
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Saw the following trace in MiniOzoneChaosCluster run.
> {code}
> C  [librocksdbjni17271331491728127.jnilib+0x9755c]  
> Java_org_rocksdb_RocksDB_write0+0x1c
> J 13917  org.rocksdb.RocksDB.write0(JJJ)V (0 bytes) @ 0x0001102ff62e 
> [0x0001102ff580+0xae]
> J 17167 C2 
> org.apache.hadoop.utils.RocksDBStore.writeBatch(Lorg/apache/hadoop/utils/BatchOperation;)V
>  (260 bytes) @ 0x000111bbd01c [0x000111bbcde0+0x23c]
> J 20434 C1 
> org.apache.hadoop.ozone.container.keyvalue.impl.BlockManagerImpl.putBlock(Lorg/apache/hadoop/ozone/container/common/interfaces/Container;Lorg/apache/hadoop/ozone/container/common/helpers/BlockData;)J
>  (261 bytes) @ 0x000111c267ac [0x000111c25640+0x116c]
> J 19262 C2 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (866 bytes) @ 0x0001125c5aa0 [0x0001125c1560+0x4540]
> J 15095 C2 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (142 bytes) @ 0x000110ffc940 [0x000110ffc0c0+0x880]
> J 19301 C2 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (146 bytes) @ 0x000111396144 [0x000111395e60+0x2e4]
> J 15997 C2 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$$Lambda$776.get()Ljava/lang/Object;
>  (16 bytes) @ 0x000110138e54 [0x000110138d80+0xd4]
> J 15970 C2 java.util.concurrent.CompletableFuture$AsyncSupply.run()V (61 
> bytes) @ 0x00010fc80094 [0x00010fc8+0x94]
> J 17368 C2 
> java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V
>  (225 bytes) @ 0x000110b0a7a0 [0x000110b0a5a0+0x200]
> J 7389 C1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V (9 bytes) @ 
> 0x00011012a004 [0x000110129f00+0x104]
> J 6837 C1 java.lang.Thread.run()V (17 bytes) @ 0x00011002b144 
> [0x00011002b000+0x144]
> v  ~StubRoutines::call_stub
> V  [libjvm.dylib+0x2ef1f6]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x6ae
> V  [libjvm.dylib+0x2ef99a]  JavaCalls::call_virtual(JavaValue*, KlassHandle, 
> Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x164
> V  [libjvm.dylib+0x2efb46]  JavaCalls::call_virtual(JavaValue*, Handle, 
> KlassHandle, Symbol*, Symbol*, Thread*)+0x4a
> V  [libjvm.dylib+0x34a46d]  thread_entry(JavaThread*, Thread*)+0x7c
> V  [libjvm.dylib+0x56eb0f]  JavaThread::thread_main_inner()+0x9b
> V  [libjvm.dylib+0x57020a]  JavaThread::run()+0x1c2
> V  [libjvm.dylib+0x48d4a6]  java_start(Thread*)+0xf6
> C  [libsystem_pthread.dylib+0x3305]  _pthread_body+0x7e
> C  [libsystem_pthread.dylib+0x626f]  _pthread_start+0x46
> C  [libsystem_pthread.dylib+0x2415]  thread_start+0xd
> C  0x
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1517) AllocateBlock call fails with ContainerNotFoundException

2019-05-22 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845808#comment-16845808
 ] 

Shashikant Banerjee commented on HDDS-1517:
---

Thanks [~jnp] for the review. I have committed this change to trunk.

> AllocateBlock call fails with ContainerNotFoundException
> 
>
> Key: HDDS-1517
> URL: https://issues.apache.org/jira/browse/HDDS-1517
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1517.000.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In the allocateContainer call, the container is first added to the 
> pipelineStateMap and then added to the container cache. If two allocateBlock 
> calls execute concurrently, one may find the container in the 
> pipelineStateMap while the container has not yet been added to the container 
> cache, and therefore fail with a CONTAINER_NOT_FOUND exception.
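For illustration only, here is a minimal sketch of the ordering hazard described above and one way to close the window. The class and field names (pipelineStateMap, containerCache) are simplified stand-ins for the real SCM structures; this is not the committed fix.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ContainerAllocationSketch {
  private final Map<Long, String> pipelineStateMap = new ConcurrentHashMap<>();
  private final Map<Long, String> containerCache = new ConcurrentHashMap<>();

  // Problematic ordering: the container id becomes visible through the
  // pipeline state before the container cache knows about it, so a concurrent
  // allocateBlock can look it up and still miss (CONTAINER_NOT_FOUND).
  void allocateContainerRacy(long containerId, String containerInfo) {
    pipelineStateMap.put(containerId, containerInfo);
    containerCache.put(containerId, containerInfo);
  }

  // One way to avoid the window: populate the cache first, so any container id
  // discovered via the pipeline state is guaranteed to resolve in the cache.
  void allocateContainerSafe(long containerId, String containerInfo) {
    containerCache.put(containerId, containerInfo);
    pipelineStateMap.put(containerId, containerInfo);
  }

  String allocateBlock(long containerId) {
    String info = containerCache.get(containerId);
    if (info == null) {
      throw new IllegalStateException("CONTAINER_NOT_FOUND: " + containerId);
    }
    return info;
  }
}
{code}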



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1517) AllocateBlock call fails with ContainerNotFoundException

2019-05-22 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1517:
--
  Resolution: Fixed
Target Version/s: 0.4.1  (was: 0.5.0)
  Status: Resolved  (was: Patch Available)

> AllocateBlock call fails with ContainerNotFoundException
> 
>
> Key: HDDS-1517
> URL: https://issues.apache.org/jira/browse/HDDS-1517
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1517.000.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In the allocateContainer call, the container is first added to the 
> pipelineStateMap and then added to the container cache. If two allocateBlock 
> calls execute concurrently, one may find the container in the 
> pipelineStateMap while the container has not yet been added to the container 
> cache, and therefore fail with a CONTAINER_NOT_FOUND exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1517) AllocateBlock call fails with ContainerNotFoundException

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1517?focusedWorklogId=246685=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246685
 ]

ASF GitHub Bot logged work on HDDS-1517:


Author: ASF GitHub Bot
Created on: 22/May/19 11:55
Start Date: 22/May/19 11:55
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #826: HDDS-1517. 
AllocateBlock call fails with ContainerNotFoundException.
URL: https://github.com/apache/hadoop/pull/826
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246685)
Time Spent: 1h 40m  (was: 1.5h)

> AllocateBlock call fails with ContainerNotFoundException
> 
>
> Key: HDDS-1517
> URL: https://issues.apache.org/jira/browse/HDDS-1517
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1517.000.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In the allocateContainer call, the container is first added to the 
> pipelineStateMap and then added to the container cache. If two allocateBlock 
> calls execute concurrently, one may find the container in the 
> pipelineStateMap while the container has not yet been added to the container 
> cache, and therefore fail with a CONTAINER_NOT_FOUND exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1449) JVM Exit in datanode while committing a key

2019-05-22 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1449:
--
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Thanks [~msingh] for working on this. I have committed this change to trunk.

> JVM Exit in datanode while committing a key
> ---
>
> Key: HDDS-1449
> URL: https://issues.apache.org/jira/browse/HDDS-1449
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
> Attachments: 2019-04-22--20-23-56-IST.MiniOzoneChaosCluster.log, 
> hs_err_pid67466.log
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Saw the following trace in MiniOzoneChaosCluster run.
> {code}
> C  [librocksdbjni17271331491728127.jnilib+0x9755c]  
> Java_org_rocksdb_RocksDB_write0+0x1c
> J 13917  org.rocksdb.RocksDB.write0(JJJ)V (0 bytes) @ 0x0001102ff62e 
> [0x0001102ff580+0xae]
> J 17167 C2 
> org.apache.hadoop.utils.RocksDBStore.writeBatch(Lorg/apache/hadoop/utils/BatchOperation;)V
>  (260 bytes) @ 0x000111bbd01c [0x000111bbcde0+0x23c]
> J 20434 C1 
> org.apache.hadoop.ozone.container.keyvalue.impl.BlockManagerImpl.putBlock(Lorg/apache/hadoop/ozone/container/common/interfaces/Container;Lorg/apache/hadoop/ozone/container/common/helpers/BlockData;)J
>  (261 bytes) @ 0x000111c267ac [0x000111c25640+0x116c]
> J 19262 C2 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (866 bytes) @ 0x0001125c5aa0 [0x0001125c1560+0x4540]
> J 15095 C2 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (142 bytes) @ 0x000110ffc940 [0x000110ffc0c0+0x880]
> J 19301 C2 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (146 bytes) @ 0x000111396144 [0x000111395e60+0x2e4]
> J 15997 C2 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$$Lambda$776.get()Ljava/lang/Object;
>  (16 bytes) @ 0x000110138e54 [0x000110138d80+0xd4]
> J 15970 C2 java.util.concurrent.CompletableFuture$AsyncSupply.run()V (61 
> bytes) @ 0x00010fc80094 [0x00010fc8+0x94]
> J 17368 C2 
> java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V
>  (225 bytes) @ 0x000110b0a7a0 [0x000110b0a5a0+0x200]
> J 7389 C1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V (9 bytes) @ 
> 0x00011012a004 [0x000110129f00+0x104]
> J 6837 C1 java.lang.Thread.run()V (17 bytes) @ 0x00011002b144 
> [0x00011002b000+0x144]
> v  ~StubRoutines::call_stub
> V  [libjvm.dylib+0x2ef1f6]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x6ae
> V  [libjvm.dylib+0x2ef99a]  JavaCalls::call_virtual(JavaValue*, KlassHandle, 
> Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x164
> V  [libjvm.dylib+0x2efb46]  JavaCalls::call_virtual(JavaValue*, Handle, 
> KlassHandle, Symbol*, Symbol*, Thread*)+0x4a
> V  [libjvm.dylib+0x34a46d]  thread_entry(JavaThread*, Thread*)+0x7c
> V  [libjvm.dylib+0x56eb0f]  JavaThread::thread_main_inner()+0x9b
> V  [libjvm.dylib+0x57020a]  JavaThread::run()+0x1c2
> V  [libjvm.dylib+0x48d4a6]  java_start(Thread*)+0xf6
> C  [libsystem_pthread.dylib+0x3305]  _pthread_body+0x7e
> C  [libsystem_pthread.dylib+0x626f]  _pthread_start+0x46
> C  [libsystem_pthread.dylib+0x2415]  thread_start+0xd
> C  0x
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1578) Add libstdc++ to ozone build docker image

2019-05-22 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845800#comment-16845800
 ] 

Elek, Marton commented on HDDS-1578:


+1 Thanks for the update [~vivekratnavel]. I updated the ozone-build branch in the 
hadoop-docker-ozone repository. This branch (ozone-build) will be used by the 
apache/ozone-build containers.

But it hasn't been requested yet. As a temporary workaround I updated my temporary 
image to apply the changes for ci.anzix.net.

> Add libstdc++ to ozone build docker image
> -
>
> Key: HDDS-1578
> URL: https://issues.apache.org/jira/browse/HDDS-1578
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1578.v1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> libstdc++ is required for the node install in Alpine builds. Otherwise we get 
> this error:
> {code:java}
> [ERROR] node: error while loading shared libraries: libstdc++.so.6: cannot 
> open shared object file: No such file or directory{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1449) JVM Exit in datanode while committing a key

2019-05-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1449?focusedWorklogId=246681=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-246681
 ]

ASF GitHub Bot logged work on HDDS-1449:


Author: ASF GitHub Bot
Created on: 22/May/19 11:48
Start Date: 22/May/19 11:48
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #825: HDDS-1449. 
JVM Exit in datanode while committing a key. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/825
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 246681)
Time Spent: 1h 10m  (was: 1h)

> JVM Exit in datanode while committing a key
> ---
>
> Key: HDDS-1449
> URL: https://issues.apache.org/jira/browse/HDDS-1449
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Attachments: 2019-04-22--20-23-56-IST.MiniOzoneChaosCluster.log, 
> hs_err_pid67466.log
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Saw the following trace in MiniOzoneChaosCluster run.
> {code}
> C  [librocksdbjni17271331491728127.jnilib+0x9755c]  
> Java_org_rocksdb_RocksDB_write0+0x1c
> J 13917  org.rocksdb.RocksDB.write0(JJJ)V (0 bytes) @ 0x0001102ff62e 
> [0x0001102ff580+0xae]
> J 17167 C2 
> org.apache.hadoop.utils.RocksDBStore.writeBatch(Lorg/apache/hadoop/utils/BatchOperation;)V
>  (260 bytes) @ 0x000111bbd01c [0x000111bbcde0+0x23c]
> J 20434 C1 
> org.apache.hadoop.ozone.container.keyvalue.impl.BlockManagerImpl.putBlock(Lorg/apache/hadoop/ozone/container/common/interfaces/Container;Lorg/apache/hadoop/ozone/container/common/helpers/BlockData;)J
>  (261 bytes) @ 0x000111c267ac [0x000111c25640+0x116c]
> J 19262 C2 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (866 bytes) @ 0x0001125c5aa0 [0x0001125c1560+0x4540]
> J 15095 C2 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (142 bytes) @ 0x000110ffc940 [0x000110ffc0c0+0x880]
> J 19301 C2 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandRequestProto;Lorg/apache/hadoop/ozone/container/common/transport/server/ratis/DispatcherContext;)Lorg/apache/hadoop/hdds/protocol/datanode/proto/ContainerProtos$ContainerCommandResponseProto;
>  (146 bytes) @ 0x000111396144 [0x000111395e60+0x2e4]
> J 15997 C2 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$$Lambda$776.get()Ljava/lang/Object;
>  (16 bytes) @ 0x000110138e54 [0x000110138d80+0xd4]
> J 15970 C2 java.util.concurrent.CompletableFuture$AsyncSupply.run()V (61 
> bytes) @ 0x00010fc80094 [0x00010fc8+0x94]
> J 17368 C2 
> java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V
>  (225 bytes) @ 0x000110b0a7a0 [0x000110b0a5a0+0x200]
> J 7389 C1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V (9 bytes) @ 
> 0x00011012a004 [0x000110129f00+0x104]
> J 6837 C1 java.lang.Thread.run()V (17 bytes) @ 0x00011002b144 
> [0x00011002b000+0x144]
> v  ~StubRoutines::call_stub
> V  [libjvm.dylib+0x2ef1f6]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x6ae
> V  [libjvm.dylib+0x2ef99a]  JavaCalls::call_virtual(JavaValue*, KlassHandle, 
> Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x164
> V  [libjvm.dylib+0x2efb46]  JavaCalls::call_virtual(JavaValue*, Handle, 
> KlassHandle, Symbol*, Symbol*, Thread*)+0x4a
> V  [libjvm.dylib+0x34a46d]  thread_entry(JavaThread*, Thread*)+0x7c
> V  [libjvm.dylib+0x56eb0f]  

[jira] [Comment Edited] (HDFS-12914) Block report leases cause missing blocks until next report

2019-05-22 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845764#comment-16845764
 ] 

He Xiaoqiao edited comment on HDFS-12914 at 5/22/19 10:48 AM:
--

[~smarella], some minor comments about [^HDFS-12914-trunk.01.patch]:
a. we need to check whether #context is null when checking the lease;
b. maybe we should also catch #UnregisteredNodeException and return 
{{RegisterCommand.REGISTER}};
c. {{datanodeManager.getDatanode(nodeId)}} may return null, so we 
should check for {{null}} before passing it as a parameter to 
BlockReportLeaseManager#checkLease;
d. it would be better to add some unit tests, as [~jojochuang] and [~starphin] 
mentioned above.


was (Author: hexiaoqiao):
[~smarella], some minor comments about [^HDFS-12914-trunk.01.patch]:
a. we need to check whether #context is null when checking the lease;
b. maybe we should also catch #UnregisteredNodeException and return 
{{RegisterCommand.REGISTER}};
c. {{datanodeManager.getDatanode(nodeId)}} may return null, so we 
should check for {{null}} before passing it as a parameter to 
BlockReportLeaseManager#checkLease;
d. it would be better to add some unit tests, as [~starphin] mentioned above.

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception.  
> It returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and 
> is interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs' 
> next FBR is sent and/or forced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-05-22 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845764#comment-16845764
 ] 

He Xiaoqiao commented on HDFS-12914:


[~smarella], some minor comments about [^HDFS-12914-trunk.01.patch]:
a. we need to check whether #context is null when checking the lease;
b. maybe we should also catch #UnregisteredNodeException and return 
{{RegisterCommand.REGISTER}};
c. {{datanodeManager.getDatanode(nodeId)}} may return null, so we 
should check for {{null}} before passing it as a parameter to 
BlockReportLeaseManager#checkLease;
d. it would be better to add some unit tests, as [~starphin] mentioned above.
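To make suggestions (a)–(c) concrete, here is a rough sketch of the shape of the checks. It uses simplified placeholder types rather than the real NameNode classes (the enum constant REGISTER stands in for RegisterCommand.REGISTER) and is not the actual HDFS-12914 patch.

{code:java}
class LeaseCheckSketch {
  // Placeholder stand-ins for the real NameNode types.
  static class Context { long leaseId; }
  static class DatanodeDescriptor { }
  static class UnregisteredNodeException extends Exception { }
  enum Command { NONE, REGISTER }

  interface DatanodeManager {
    DatanodeDescriptor getDatanode(String nodeId) throws UnregisteredNodeException;
  }
  interface BlockReportLeaseManager {
    boolean checkLease(DatanodeDescriptor node, long nowMs, long leaseId);
  }

  Command checkBlockReportLease(DatanodeManager dm, BlockReportLeaseManager lm,
                                String nodeId, Context context, long nowMs) {
    if (context == null) {          // (a) no lease context: nothing to check
      return Command.NONE;
    }
    DatanodeDescriptor node;
    try {
      node = dm.getDatanode(nodeId);
    } catch (UnregisteredNodeException e) {
      return Command.REGISTER;      // (b) ask the DN to re-register
    }
    if (node == null) {             // (c) guard the null return as well
      return Command.REGISTER;
    }
    lm.checkLease(node, nowMs, context.leaseId);
    return Command.NONE;
  }
}
{code}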

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception.  
> It returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and 
> is interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs' 
> next FBR is sent and/or forced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14492) Snapshot memory leak

2019-05-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845759#comment-16845759
 ] 

Wei-Chiu Chuang edited comment on HDFS-14492 at 5/22/19 10:39 AM:
--

Work in progress: 
https://github.com/jojochuang/hadoop-common/commit/7b753be2c6a2227300cc612cf08861af7427adef

Without the fix, the fsimage that I mentioned occupies 130.9GB after deleting 
all snapshots. After the fix, the heap reduces to 100.6GB after deleting all 
snapshots.

But I can still see around 10 million FileWithSnapshotFeature and FileDiffList 
objects lingering in the heap. If I checkpoint and restart, the NN uses just 
87.7GB of heap (all FileWithSnapshotFeature and FileDiffList objects are gone 
after restart). Of course some runtime state gets cleaned up by the restart, but 
there are quite a few more GBs of heap waiting to be optimized.


was (Author: jojochuang):
Work in progress: 
https://github.com/jojochuang/hadoop-common/commit/7b753be2c6a2227300cc612cf08861af7427adef

Without the fix, the fsimage that I mentioned occupies 130.9GB after deleting 
all snapshots. After the fix, the heap reduces to 100.6GB after deleting all 
snapshots.

But I can still see around 10 million FileWithSnapshotFeature and FileDiffList 
objects lingering in the heap. If I checkpoint and restart, the NN uses just 
87.7GB of heap. Of course some runtime state gets cleaned up by the restart, 
but there are quite a few more GBs of heap waiting to be optimized.

> Snapshot memory leak
> 
>
> Key: HDFS-14492
> URL: https://issues.apache.org/jira/browse/HDFS-14492
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.6.0
> Environment: CDH5.14.4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> We recently examined the NameNode heap dump of a big, heavy snapshot user, 
> trying to trim some fat, and sure enough we found a memory leak in it: when 
> snapshots are removed, the corresponding data structures are not removed.
> This cluster has 586 million file system objects (286 million files, 287 
> million blocks, 13 million directories), using around 132gb of heap.
> While only 44.5 million files have snapshotted copies 
> (INodeFileAttributes$SnapshotCopy), most inodes (nearly 212 million) have 
> FileWithSnapshotFeature and FileDiffList. Those inodes had snapshotted copies 
> at some point in the past, but after the snapshots are removed, those data 
> structures are still kept in the heap.
> INode$Feature = 32.5 bytes on average, FileWithSnapshotFeature = 32 bytes, 
> FileDiffList = 24 bytes. It may not sound like a lot, but they add up quickly in 
> large clusters like this. In this cluster, a whopping 13.8gb of memory could 
> have been saved:  ((32.5 + 32 + 24) bytes * (211997769 -  44572380) =~ 
> 13.8gb) if not for this bug. That is more than 10% of savings in heap size.
> Heap histogram for reference:
> {noformat}
> num #instances #bytes class name
>  --
>  1: 286418254 27496152384 org.apache.hadoop.hdfs.server.namenode.INodeFile
>  2: 28737 18388622528 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
>  3: 227899550 17144816120 [B
>  4: 287324031 13769408616 
> [Lorg.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
>  5: 71352116 12353841568 [Ljava.lang.Object;
>  6: 286322650 9170335840 
> [Lorg.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
>  7: 235632329 7658462416 
> [Lorg.apache.hadoop.hdfs.server.namenode.INode$Feature;
>  8: 4 7046430816 [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
>  9: 211997769 6783928608 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature
>  10: 211997769 5087946456 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList
>  11: 76586261 3780468856 [I
>  12: 44572380 3209211360 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy
>  13: 58634517 2345380680 java.util.ArrayList
>  14: 44572380 2139474240 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff
>  15: 76582416 1837977984 org.apache.hadoop.hdfs.server.namenode.AclFeature
>  16: 12907668 1135874784 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory{noformat}
> [~szetszwo] [~arpaga] [~smeng] [~shashikant]  any thoughts?
> I am thinking that inside 
> AbstractINodeDiffList#deleteSnapshotDiff(), in addition to cleaning up file 
> diffs, it should also remove the FileWithSnapshotFeature. I am not familiar with 
> the snapshot implementation, so any guidance is greatly appreciated.
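For what it's worth, here is a rough sketch of that idea using simplified stand-in types (not the actual AbstractINodeDiffList/INodeFile code): once the last diff is gone, detach the feature itself rather than keeping an empty FileWithSnapshotFeature and FileDiffList on the inode.

{code:java}
import java.util.ArrayList;
import java.util.List;

class SnapshotDiffCleanupSketch {
  static class FileDiff { }
  static class FileDiffList { final List<FileDiff> diffs = new ArrayList<>(); }
  static class FileWithSnapshotFeature { final FileDiffList diffList = new FileDiffList(); }
  static class INodeFile {
    FileWithSnapshotFeature snapshotFeature;   // ~88 bytes of overhead per inode while present
    void removeSnapshotFeature() { snapshotFeature = null; }
  }

  void deleteSnapshotDiff(INodeFile file, FileDiff diff) {
    FileWithSnapshotFeature sf = file.snapshotFeature;
    if (sf == null) {
      return;
    }
    sf.diffList.diffs.remove(diff);
    // The extra step being proposed: when no diffs remain, drop the feature so
    // the inode no longer references FileWithSnapshotFeature/FileDiffList at all.
    if (sf.diffList.diffs.isEmpty()) {
      file.removeSnapshotFeature();
    }
  }
}
{code}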



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Commented] (HDFS-14492) Snapshot memory leak

2019-05-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845759#comment-16845759
 ] 

Wei-Chiu Chuang commented on HDFS-14492:


Work in progress: 
https://github.com/jojochuang/hadoop-common/commit/7b753be2c6a2227300cc612cf08861af7427adef

Without the fix, the fsimage that I mentioned occupies 130.9GB after deleting 
all snapshots. After the fix, the heap reduces to 100.6GB after deleting all 
snapshots.

But I can still see around 10 million FileWithSnapshotFeature and FileDiffList 
objects lingering in the heap. If I checkpoint and restart, the NN uses just 
87.7GB of heap. Of course some runtime state gets cleaned up by the restart, 
but there are quite a few more GBs of heap waiting to be optimized.

> Snapshot memory leak
> 
>
> Key: HDFS-14492
> URL: https://issues.apache.org/jira/browse/HDFS-14492
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.6.0
> Environment: CDH5.14.4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> We recently examined the NameNode heap dump of a big, heavy snapshot user, 
> trying to trim some fat, and sure enough we found a memory leak in it: when 
> snapshots are removed, the corresponding data structures are not removed.
> This cluster has 586 million file system objects (286 million files, 287 
> million blocks, 13 million directories), using around 132gb of heap.
> While only 44.5 million files have snapshotted copies 
> (INodeFileAttributes$SnapshotCopy), most inodes (nearly 212 million) have 
> FileWithSnapshotFeature and FileDiffList. Those inodes had snapshotted copies 
> at some point in the past, but after the snapshots are removed, those data 
> structures are still kept in the heap.
> INode$Feature = 32.5 bytes on average, FileWithSnapshotFeature = 32 bytes, 
> FileDiffList = 24 bytes. It may not sound like a lot, but they add up quickly in 
> large clusters like this. In this cluster, a whopping 13.8gb of memory could 
> have been saved:  ((32.5 + 32 + 24) bytes * (211997769 -  44572380) =~ 
> 13.8gb) if not for this bug. That is more than 10% of savings in heap size.
> Heap histogram for reference:
> {noformat}
> num #instances #bytes class name
>  --
>  1: 286418254 27496152384 org.apache.hadoop.hdfs.server.namenode.INodeFile
>  2: 28737 18388622528 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
>  3: 227899550 17144816120 [B
>  4: 287324031 13769408616 
> [Lorg.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
>  5: 71352116 12353841568 [Ljava.lang.Object;
>  6: 286322650 9170335840 
> [Lorg.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
>  7: 235632329 7658462416 
> [Lorg.apache.hadoop.hdfs.server.namenode.INode$Feature;
>  8: 4 7046430816 [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
>  9: 211997769 6783928608 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature
>  10: 211997769 5087946456 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList
>  11: 76586261 3780468856 [I
>  12: 44572380 3209211360 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy
>  13: 58634517 2345380680 java.util.ArrayList
>  14: 44572380 2139474240 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff
>  15: 76582416 1837977984 org.apache.hadoop.hdfs.server.namenode.AclFeature
>  16: 12907668 1135874784 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory{noformat}
> [~szetszwo] [~arpaga] [~smeng] [~shashikant]  any thoughts?
> I am thinking that inside 
> AbstractINodeDiffList#deleteSnapshotDiff(), in addition to cleaning up file 
> diffs, it should also remove the FileWithSnapshotFeature. I am not familiar with 
> the snapshot implementation, so any guidance is greatly appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1112) Add a ozoneFilesystem related api's to OzoneManager to reduce redundant lookups

2019-05-22 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDDS-1112.
-
Resolution: Fixed

Resolving this as all the subtasks have been completed.

> Add a ozoneFilesystem related api's to OzoneManager to reduce redundant 
> lookups
> ---
>
> Key: HDDS-1112
> URL: https://issues.apache.org/jira/browse/HDDS-1112
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
>
> With the current OzoneFilesystem design, most of the lookups during create 
> happen via the getFileStatus api, which in turn does a getKey or a listKey 
> for the keys in the Ozone bucket. 
> In most cases, the files do not exist before creation, and hence 
> these lookups are wasted time. This jira proposes to 
> optimize the "create" and "getFileState" apis in OzoneFileSystem by 
> introducing OzoneFilesystem-friendly apis in OM.
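As a rough illustration of the round trips being saved (method names here are hypothetical, not the actual OzoneFileSystem/OM API): today create effectively probes for the key first and then writes it, while a filesystem-friendly OM call could do the existence and parent checks server-side in a single RPC.

{code:java}
interface OmClientSketch {
  boolean keyExists(String path);                  // one RPC to OM
  void createKey(String path);                     // another RPC to OM

  // Current shape: two round trips per create, and for new files the first
  // lookup is almost always a miss, i.e. wasted work.
  default void createViaGetFileStatus(String path) {
    if (keyExists(path)) {
      throw new IllegalStateException("already exists: " + path);
    }
    createKey(path);
  }

  // Proposed shape: one OM call that performs the existence/parent checks on
  // the server and creates the key atomically.
  void createFile(String path, boolean overwrite, boolean recursive);
}
{code}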



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-05-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845746#comment-16845746
 ] 

Wei-Chiu Chuang commented on HDFS-12914:


Thanks for the patch. The fix makes sense to me.
Any idea how to test it? A simple unit test (like testRefreshLeaseId in 
TestBPOfferService, added by HDFS-14314) should be able to verify that the NN 
rejects an expired block report entirely.
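Just to illustrate the shape of such a test, here is a skeleton with placeholder types only (not the real BlockReportLeaseManager/NameNodeRpcServer wiring): the assertion is that a report carrying an expired or wrong lease id is rejected outright rather than being treated as a normal, empty report.

{code:java}
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class ExpiredLeaseRejectionSketchTest {
  /** Stand-in for the lease check: valid only for the issued id within a fixed window. */
  static final class LeaseChecker {
    private final long issuedLeaseId, issuedAtMs, windowMs;
    LeaseChecker(long leaseId, long issuedAtMs, long windowMs) {
      this.issuedLeaseId = leaseId; this.issuedAtMs = issuedAtMs; this.windowMs = windowMs;
    }
    boolean checkLease(long leaseId, long nowMs) {
      return leaseId == issuedLeaseId && nowMs - issuedAtMs <= windowMs;
    }
  }

  @Test
  public void expiredLeaseRejectsFullBlockReport() {
    LeaseChecker checker = new LeaseChecker(42L, 0L, 1_000L);
    assertTrue(checker.checkLease(42L, 500L));      // within the lease window
    assertFalse(checker.checkLease(42L, 10_000L));  // expired: FBR must be rejected
    assertFalse(checker.checkLease(7L, 500L));      // wrong lease id: also rejected
  }
}
{code}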

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception.  
> It returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and 
> is interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs' 
> next FBR is sent and/or forced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14312) KMS-o-meter: Scale test KMS using kms audit log

2019-05-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845720#comment-16845720
 ] 

Wei-Chiu Chuang commented on HDFS-14312:


I dropped the work for the past few weeks. I expect to resume soon.

In the meantime, here's my code: 
https://github.com/jojochuang/hadoop-common/tree/replay_kms_audit
It's very immature as of now, but I'd like to post it in a public GitHub repo before 
I forget.

> KMS-o-meter: Scale test KMS using kms audit log
> ---
>
> Key: HDFS-14312
> URL: https://issues.apache.org/jira/browse/HDFS-14312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> It appears to me that Dynamometer's architecture allows KMS scale tests too.
> I imagine there are two ways to scale test a KMS.
> # Take KMS audit logs, and replay the logs against a KMS.
> # Configure Dynamometer to start a KMS in addition to the NameNode. Assuming the 
> fsimage comes from an encrypted cluster, replaying the HDFS audit log also tests 
> the KMS.
> It would be even more interesting to have a tool that converts an unencrypted 
> cluster fsimage to an encrypted one.
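To make the first approach above (replaying KMS audit logs) a bit more concrete, here is a minimal sketch. The OpRecord shape and the audit-log parsing are assumptions left out of scope; only the KeyProvider/KeyProviderCryptoExtension calls are existing Hadoop APIs, and this is not the code in the branch linked above.

{code:java}
import java.net.URI;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KmsAuditReplaySketch {
  /** Hypothetical record distilled from one kms-audit.log line. */
  static class OpRecord {
    final String op;    // e.g. "GENERATE_EEK"
    final String key;   // key name the operation targeted
    OpRecord(String op, String key) { this.op = op; this.key = key; }
  }

  public static void replay(List<OpRecord> ops, URI kmsUri, Configuration conf)
      throws Exception {
    KeyProvider provider = KeyProviderFactory.get(kmsUri, conf);
    KeyProviderCryptoExtension kp =
        KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);
    for (OpRecord r : ops) {
      if ("GENERATE_EEK".equals(r.op)) {
        // what HDFS issues against the KMS when creating a file in an encryption zone
        kp.generateEncryptedKey(r.key);
      } else if ("GET_METADATA".equals(r.op)) {
        kp.getMetadata(r.key);
      }
      // DECRYPT_EEK would need the original EDEK from the log, so it is skipped here.
    }
  }
}
{code}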



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9440) Improve BlockPlacementPolicyDefault's to be able to rebalance when picking of excess replicas

2019-05-22 Thread Fred Peng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845675#comment-16845675
 ] 

Fred Peng commented on HDFS-9440:
-

Hi [~xiaochen], are you still working on this? The fix in HDFS-9314 cannot handle 
\{SSD(rack 1), DISK(rack 3), DISK(rack 3), DISK(rack 3)}. Thus, the SSD replica 
is always left in the cluster. 

> Improve BlockPlacementPolicyDefault's to be able to rebalance when picking of 
> excess replicas
> -
>
> Key: HDFS-9440
> URL: https://issues.apache.org/jira/browse/HDFS-9440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> The test case used in HDFS-9313 and HDFS-9314 identified a limitation of 
> excess replica picking. If the current replicas are on
> {SSD(rack 1), DISK(rack 3), DISK(rack 3), DISK(rack 3)}
> and the storage policy changes to HOT_STORAGE_POLICY_ID, 
> BlockPlacementPolicyDefault won't be able to delete the SSD replica, because 
> deleting the SSD on rack 1 would violate the block placement policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845646#comment-16845646
 ] 

Hadoop QA commented on HDFS-13955:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
44s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
27s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13955 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969361/HDFS-13955-HDFS-13891.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dfd1f34615da 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26816/testReport/ |
| Max. process+thread count | 960 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26816/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
