[jira] [Commented] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-16 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841921#comment-16841921
 ] 

Jitendra Nath Pandey commented on HDDS-1530:


Cancelled and resubmitted the patch to trigger pre-commit again.

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: HDDS-1530.001.patch, HDDS-1530.002.patch
>
>
> *Current problems:*
>  1. Freon does not support big files larger than 2 GB because it uses an int 
> type for the "keySize" parameter and for the "keyValue" buffer size.
>  2. Freon allocates an entire key-sized buffer for each key at once, so if the 
> key size is large and the concurrency is high, Freon frequently fails with OOM 
> exceptions.
>  3. Freon lacks an option such as "--validateWrites", so users cannot specify 
> that verification is required after writing.
> *Some solutions:*
>  1. Use a long type for the "keySize" parameter, so that Freon can support big 
> files larger than 2 GB.
>  2. Reuse a small buffer repeatedly instead of allocating the entire key-sized 
> buffer at once; the default buffer size is 4 KB and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can provide 
> this option to indicate that validation is required after each write.
>  
>  
>  
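For readers skimming the proposal above, here is a minimal, self-contained sketch of the buffered-write-plus-validation idea. It is not the attached patch: the class and method names are hypothetical, and in Freon the streams would come from the Ozone client rather than plain java.io streams.

{code:java}
import java.io.InputStream;
import java.io.OutputStream;
import java.security.MessageDigest;
import java.util.concurrent.ThreadLocalRandom;

public class BufferedKeyWriter {

  /** Writes keySize bytes through a reusable buffer; returns a digest of the data. */
  public static byte[] writeKey(OutputStream out, long keySize, int bufferSize)
      throws Exception {
    byte[] buffer = new byte[bufferSize];          // e.g. 4 KB, from --bufferSize
    ThreadLocalRandom.current().nextBytes(buffer); // fill once, reuse for every chunk
    MessageDigest digest = MessageDigest.getInstance("MD5");

    long remaining = keySize;                      // long keySize, so > 2 GB works
    while (remaining > 0) {
      int toWrite = (int) Math.min(buffer.length, remaining);
      out.write(buffer, 0, toWrite);
      digest.update(buffer, 0, toWrite);
      remaining -= toWrite;
    }
    return digest.digest();                        // kept around for --validateWrites
  }

  /** Re-reads the key and compares its digest with the one recorded on write. */
  public static boolean validateKey(InputStream in, byte[] expected, int bufferSize)
      throws Exception {
    byte[] buffer = new byte[bufferSize];
    MessageDigest digest = MessageDigest.getInstance("MD5");
    int read;
    while ((read = in.read(buffer)) != -1) {
      digest.update(buffer, 0, read);
    }
    return MessageDigest.isEqual(expected, digest.digest());
  }
}
{code}

The point of the sketch is that memory use per concurrent writer is bounded by bufferSize instead of keySize, which is what removes the OOM risk described in problem 2.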






[jira] [Updated] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-16 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1530:
---
Status: Patch Available  (was: Open)

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: HDDS-1530.001.patch, HDDS-1530.002.patch
>
>
> *Current problems:*
>  1. Freon does not support big files larger than 2 GB because it uses an int 
> type for the "keySize" parameter and for the "keyValue" buffer size.
>  2. Freon allocates an entire key-sized buffer for each key at once, so if the 
> key size is large and the concurrency is high, Freon frequently fails with OOM 
> exceptions.
>  3. Freon lacks an option such as "--validateWrites", so users cannot specify 
> that verification is required after writing.
> *Some solutions:*
>  1. Use a long type for the "keySize" parameter, so that Freon can support big 
> files larger than 2 GB.
>  2. Reuse a small buffer repeatedly instead of allocating the entire key-sized 
> buffer at once; the default buffer size is 4 KB and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can provide 
> this option to indicate that validation is required after each write.
>  
>  
>  






[jira] [Updated] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-16 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1530:
---
Status: Open  (was: Patch Available)

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: HDDS-1530.001.patch, HDDS-1530.002.patch
>
>
> *Current problems:*
>  1. Freon does not support big files larger than 2 GB because it uses an int 
> type for the "keySize" parameter and for the "keyValue" buffer size.
>  2. Freon allocates an entire key-sized buffer for each key at once, so if the 
> key size is large and the concurrency is high, Freon frequently fails with OOM 
> exceptions.
>  3. Freon lacks an option such as "--validateWrites", so users cannot specify 
> that verification is required after writing.
> *Some solutions:*
>  1. Use a long type for the "keySize" parameter, so that Freon can support big 
> files larger than 2 GB.
>  2. Reuse a small buffer repeatedly instead of allocating the entire key-sized 
> buffer at once; the default buffer size is 4 KB and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can provide 
> this option to indicate that validation is required after each write.
>  
>  
>  






[jira] [Updated] (HDDS-1517) AllocateBlock call fails with ContainerNotFoundException

2019-05-16 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1517:
---
Status: Patch Available  (was: Open)

> AllocateBlock call fails with ContainerNotFoundException
> 
>
> Key: HDDS-1517
> URL: https://issues.apache.org/jira/browse/HDDS-1517
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1517.000.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In the allocateContainer call, the container is first added to the 
> pipelineStateMap and then added to the container cache. If two allocateBlock 
> calls execute concurrently, one of them may find the container in the 
> pipelineStateMap while the container has not yet been added to the container 
> cache, and therefore fail with a CONTAINER_NOT_FOUND exception.
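To make the interleaving concrete, a small self-contained toy model of the race follows. The names are illustrative only; this is neither the real SCM code nor necessarily the fix taken in the attached patch. It simply shows how publishing both structures under one lock (and publishing the cache entry first) removes the window in which the container is visible in one structure but not the other.

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

public class ContainerPublishRace {

  private final Map<Long, Long> pipelineStateMap = new HashMap<>(); // containerId -> pipelineId
  private final Map<Long, String> containerCache = new HashMap<>(); // containerId -> container info
  private final ReentrantLock lock = new ReentrantLock();

  /** allocateContainer: both structures are updated under one lock. */
  public void allocateContainer(long pipelineId, long containerId, String info) {
    lock.lock();
    try {
      containerCache.put(containerId, info);          // publish the container first
      pipelineStateMap.put(containerId, pipelineId);  // then make it visible to allocateBlock
    } finally {
      lock.unlock();
    }
  }

  /** What a concurrent allocateBlock would do: look up under the same lock. */
  public String findContainer(long containerId) {
    lock.lock();
    try {
      if (!pipelineStateMap.containsKey(containerId)) {
        throw new IllegalStateException("CONTAINER_NOT_FOUND: " + containerId);
      }
      // Because both structures are updated atomically above, this lookup can
      // no longer observe the half-published state described in the report.
      return containerCache.get(containerId);
    } finally {
      lock.unlock();
    }
  }
}
{code}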






[jira] [Updated] (HDDS-1517) AllocateBlock call fails with ContainerNotFoundException

2019-05-16 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1517:
---
Status: Open  (was: Patch Available)

> AllocateBlock call fails with ContainerNotFoundException
> 
>
> Key: HDDS-1517
> URL: https://issues.apache.org/jira/browse/HDDS-1517
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1517.000.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In the allocateContainer call, the container is first added to the 
> pipelineStateMap and then added to the container cache. If two allocateBlock 
> calls execute concurrently, one of them may find the container in the 
> pipelineStateMap while the container has not yet been added to the container 
> cache, and therefore fail with a CONTAINER_NOT_FOUND exception.






[jira] [Commented] (HDFS-14303) check block directory logic is not correct when there is only a meta file; prints a meaningless warn log

2019-05-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841907#comment-16841907
 ] 

Hadoop QA commented on HDFS-14303:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.9 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} root in branch-2.9 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs in branch-2.9 failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  8s{color} | {color:orange} The patch fails to run checkstyle in hadoop-hdfs 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs in branch-2.9 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs in branch-2.9 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs in branch-2.9 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  7s{color} | {color:orange} The patch fails to run checkstyle in hadoop-hdfs 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
10s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:07598f5 |
| JIRA Issue | HDFS-14303 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968978/HDFS-14303-branch-2.9.011.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 032e5ac6f600 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.9 / 33c2059 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.7.0_95 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26800/artifact/out/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26800/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26800/artifact/out//testptch/patchprocess/maven-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26800/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26800/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26800/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |

[jira] [Updated] (HDDS-1550) MiniOzoneCluster is not shutting down all the threads during shutdown.

2019-05-16 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1550:

Summary: MiniOzoneCluster is not shutting down all the threads during 
shutdown.  (was: MiniOzoneChaosCluster is not shutting down all the threads 
during shutdown.)

> MiniOzoneCluster is not shutting down all the threads during shutdown.
> --
>
> Key: HDDS-1550
> URL: https://issues.apache.org/jira/browse/HDDS-1550
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> MiniOzoneCluster does not shut down all of its threads during shutdown. All the 
> threads must be shut down to close the cluster correctly.
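As background only, the cleanup involved is the usual executor-shutdown pattern sketched below. This is a generic illustration, not the MiniOzoneCluster change itself, and the class name is hypothetical.

{code:java}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public final class ThreadCleanup {

  /** Shuts down every thread pool a test cluster owns so no threads outlive shutdown(). */
  public static void shutdownAll(List<ExecutorService> executors) {
    for (ExecutorService executor : executors) {
      executor.shutdown();                 // stop accepting new tasks
      try {
        if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
          executor.shutdownNow();          // interrupt anything still running
        }
      } catch (InterruptedException e) {
        executor.shutdownNow();
        Thread.currentThread().interrupt();
      }
    }
  }
}
{code}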






[jira] [Work logged] (HDDS-1451) SCMBlockManager findPipeline and createPipeline are not lock protected

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1451?focusedWorklogId=243818&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243818
 ]

ASF GitHub Bot logged work on HDDS-1451:


Author: ASF GitHub Bot
Created on: 17/May/19 04:16
Start Date: 17/May/19 04:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #799: HDDS-1451 : 
SCMBlockManager findPipeline and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#issuecomment-493313241
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 559 | trunk passed |
   | +1 | compile | 252 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1065 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 294 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 529 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 530 | the patch passed |
   | +1 | compile | 253 | the patch passed |
   | +1 | javac | 253 | the patch passed |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 794 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | +1 | findbugs | 552 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 204 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1915 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7326 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/799 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ecd9e7269767 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c183bd8 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/testReport/ |
   | Max. process+thread count | 3676 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 243818)
Time Spent: 1h 10m  (was: 1h)

> SCMBlockManager findPipeline and createPipeline are not lock protected
> --
>
> Key: HDDS-1451
> URL: 

[jira] [Commented] (HDFS-14303) check block directory logic is not correct when there is only a meta file; prints a meaningless warn log

2019-05-16 Thread qiang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841892#comment-16841892
 ] 

qiang Liu commented on HDFS-14303:
--

Thanks [~hexiaoqiao] for your review. I have taken your advice and submitted a new 
patch [^HDFS-14303-branch-2.9.011.patch] targeting branch-2.9.

The changes are listed below:

a. Removed #testLogAppender and another private function, moved all the logic into 
#testScanDirectoryStructureWarn, and adjusted some of the logic.

b. The annotation was already fixed in [^HDFS-14303-branch-2.010.patch]; you were 
looking at the older version [^HDFS-14303-branch-2.009.patch]. In any case, this is 
fixed in the latest patch [^HDFS-14303-branch-2.9.011.patch], and I have adjusted 
some comments and added a timeout annotation.

c. The patch targets branch-2.9, as you advised.

Thanks again for reviewing.

> check block directory logic is not correct when there is only a meta file; 
> prints a meaningless warn log
> --
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Attachments: HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta file; 
> it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68
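To illustrate the intended behavior, here is a small sketch of the guard the report argues for: only emit the layout-upgrade warning when a block file actually exists, so an orphaned meta file no longer triggers the confusing message quoted above. This is an illustration with hypothetical names, not the attached patch.

{code:java}
import java.io.File;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class BlockDirectoryCheck {

  private static final Logger LOG =
      LoggerFactory.getLogger(BlockDirectoryCheck.class);

  public static void checkBlockLocation(long blockId, File actualBlockFile,
      File expectedDir) {
    if (actualBlockFile == null || !actualBlockFile.isFile()) {
      // Only a meta file is present; there is no block file to relocate,
      // so the layout-upgrade warning would be meaningless.
      return;
    }
    if (!expectedDir.equals(actualBlockFile.getParentFile())) {
      LOG.warn("Block: {} has to be upgraded to block ID-based layout. "
              + "Actual block file path: {}, expected block file path: {}",
          blockId, actualBlockFile.getParent(), expectedDir);
    }
  }
}
{code}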






[jira] [Updated] (HDFS-12433) Upgrade JUnit from 4 to 5 in hadoop-hdfs security

2019-05-16 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-12433:
-
Issue Type: Test  (was: Sub-task)
Parent: (was: HDFS-12254)

> Upgrade JUnit from 4 to 5 in hadoop-hdfs security
> -
>
> Key: HDFS-12433
> URL: https://issues.apache.org/jira/browse/HDFS-12433
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Ajay Kumar
>Assignee: Kei Kori
>Priority: Major
> Attachments: HDFS-12433.001.patch, HDFS-12433.002.patch
>
>
> Upgrade JUnit from 4 to 5 in hadoop-hdfs security  
> (org.apache.hadoop.security)






[jira] [Work logged] (HDDS-1550) MiniOzoneChaosCluster is not shutting down all the threads during shutdown.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1550?focusedWorklogId=243816&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243816
 ]

ASF GitHub Bot logged work on HDDS-1550:


Author: ASF GitHub Bot
Created on: 17/May/19 03:49
Start Date: 17/May/19 03:49
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #829: HDDS-1550. 
MiniOzoneChaosCluster is not shutting down all the threads during shutdown. 
Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/829
 
 
   MiniOzoneCluster is not shutting down all of its threads during shutdown. This 
patch tries to fix that issue.
 



Issue Time Tracking
---

Worklog Id: (was: 243816)
Time Spent: 10m
Remaining Estimate: 0h

> MiniOzoneChaosCluster is not shutting down all the threads during shutdown.
> ---
>
> Key: HDDS-1550
> URL: https://issues.apache.org/jira/browse/HDDS-1550
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> MiniOzoneCluster does not shut down all of its threads during shutdown. All the 
> threads must be shut down to close the cluster correctly.






[jira] [Updated] (HDDS-1550) MiniOzoneChaosCluster is not shutting down all the threads during shutdown.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1550:
-
Labels: pull-request-available  (was: )

> MiniOzoneChaosCluster is not shutting down all the threads during shutdown.
> ---
>
> Key: HDDS-1550
> URL: https://issues.apache.org/jira/browse/HDDS-1550
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>
> MiniOzoneCluster does not shut down all of its threads during shutdown. All the 
> threads must be shut down to close the cluster correctly.






[jira] [Commented] (HDFS-12433) Upgrade JUnit from 4 to 5 in hadoop-hdfs security

2019-05-16 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841886#comment-16841886
 ] 

Akira Ajisaka commented on HDFS-12433:
--

Thanks [~kkori] for the patch!
* Are the unit test failures related to the patch? If the answer is yes, you 
need to fix the unit tests.
* Would you fix the checkstyle warnings?
* All the changes are in the hadoop-common project, so I'll move this issue from 
HDFS to HADOOP.

Now I am interested in how you created the patch. If you wrote a script for the 
patch, would you share it?
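For anyone following along, the mechanical shape of a JUnit 4 to 5 migration looks roughly like the sketch below (an illustrative test, not taken from the attached patches):

{code:java}
// JUnit 4 (before): org.junit.Before, @Test(expected = ...),
// and static org.junit.Assert.* imports.

// JUnit 5 / Jupiter (after):
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

public class TestExample {

  @BeforeEach            // was @Before; @BeforeClass becomes @BeforeAll
  public void setUp() {
  }

  @Test
  public void testAddition() {
    // Assert.assertEquals(message, expected, actual) becomes
    // Assertions.assertEquals(expected, actual, message)
    assertEquals(4, 2 + 2, "2 + 2 should be 4");
  }

  @Test
  public void testInvalid() {
    // @Test(expected = ...) becomes an explicit assertThrows call
    assertThrows(IllegalArgumentException.class, () -> {
      throw new IllegalArgumentException("invalid");
    });
  }
}
{code}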


> Upgrade JUnit from 4 to 5 in hadoop-hdfs security
> -
>
> Key: HDFS-12433
> URL: https://issues.apache.org/jira/browse/HDFS-12433
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Ajay Kumar
>Assignee: Kei Kori
>Priority: Major
> Attachments: HDFS-12433.001.patch, HDFS-12433.002.patch
>
>
> Upgrade JUnit from 4 to 5 in hadoop-hdfs security  
> (org.apache.hadoop.security)






[jira] [Assigned] (HDDS-1550) MiniOzoneChaosCluster is not shutting down all the threads during shutdown.

2019-05-16 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1550:
---

Assignee: Mukul Kumar Singh

> MiniOzoneChaosCluster is not shutting down all the threads during shutdown.
> ---
>
> Key: HDDS-1550
> URL: https://issues.apache.org/jira/browse/HDDS-1550
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>
> MiniOzoneCluster does not shut down all of its threads during shutdown. All the 
> threads must be shut down to close the cluster correctly.






[jira] [Updated] (HDDS-1550) MiniOzoneChaosCluster is not shutting down all the threads during shutdown.

2019-05-16 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1550:

Status: Patch Available  (was: Open)

> MiniOzoneChaosCluster is not shutting down all the threads during shutdown.
> ---
>
> Key: HDDS-1550
> URL: https://issues.apache.org/jira/browse/HDDS-1550
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>
> MiniOzoneCluster does not shut down all of its threads during shutdown. All the 
> threads must be shut down to close the cluster correctly.






[jira] [Updated] (HDFS-14303) check block directory logic is not correct when there is only a meta file; prints a meaningless warn log

2019-05-16 Thread qiang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiang Liu updated HDFS-14303:
-
Attachment: HDFS-14303-branch-2.9.011.patch

> check block directory logic is not correct when there is only a meta file; 
> prints a meaningless warn log
> --
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Attachments: HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta file; 
> it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68






[jira] [Commented] (HDFS-13255) RBF: Fail when try to remove mount point paths

2019-05-16 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841877#comment-16841877
 ] 

Akira Ajisaka commented on HDFS-13255:
--

wip-001 patch: 

RouterRpcServer#getLocationsForPath has a {{failIfLocked}} argument, but it is 
currently not used. After this patch, as the JavaDoc says, the method throws an 
IOException with a useful message when {{failIfLocked}} is true and the path is a 
top mount point.

Before this patch, some edit operations such as mkdir/setPermission/setOwner on a 
top mount point do not fail; after the patch, they throw IOException. Some unit 
tests that rely on the previous behavior are failing after this patch, so we need 
to fix those tests if we are going to make this kind of change.

Any thoughts?
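Roughly, the behavior described above amounts to the following simplified sketch (placeholder types and messages, not the real RouterRpcServer/MountTable code):

{code:java}
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Set;

public class MountPointGuard {

  private final Set<String> mountPoints; // e.g. "/rm-test-all/rm-test-ns10"

  public MountPointGuard(Set<String> mountPoints) {
    this.mountPoints = mountPoints;
  }

  /** Fails with a useful message when failIfLocked is true and path is a mount point. */
  public List<String> getLocationsForPath(String path, boolean failIfLocked)
      throws IOException {
    if (failIfLocked && mountPoints.contains(path)) {
      throw new IOException("The operation is not allowed because the path "
          + path + " is a mount point");
    }
    return resolve(path);
  }

  private List<String> resolve(String path) {
    return Collections.singletonList(path); // placeholder for mount-table resolution
  }
}
{code}

Under this shape, mkdir/setPermission/setOwner on a top mount point would surface the IOException instead of silently succeeding, which matches the test failures mentioned above.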

> RBF: Fail when try to remove mount point paths
> --
>
> Key: HDFS-13255
> URL: https://issues.apache.org/jira/browse/HDFS-13255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13255-HDFS-13891-wip-001.patch
>
>
> When deleting an ns-fed path that includes mount point paths, an error is raised.
> Each mount point path needs to be deleted independently.
> Operation steps:
> {code:java}
> [hadp@root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
> -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
> rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using 
> -skipTrash option
> [hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
> rm: `hdfs://ns-fed/rm-test-all': Input/output error
> {code}






[jira] [Updated] (HDFS-13255) RBF: Fail when try to remove mount point paths

2019-05-16 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13255:
-
Attachment: HDFS-13255-HDFS-13891-wip-001.patch

> RBF: Fail when try to remove mount point paths
> --
>
> Key: HDFS-13255
> URL: https://issues.apache.org/jira/browse/HDFS-13255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13255-HDFS-13891-wip-001.patch
>
>
> When deleting an ns-fed path that includes mount point paths, an error is raised.
> Each mount point path needs to be deleted independently.
> Operation steps:
> {code:java}
> [hadp@root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
> -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
> rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using 
> -skipTrash option
> [hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
> rm: `hdfs://ns-fed/rm-test-all': Input/output error
> {code}






[jira] [Work logged] (HDDS-1065) OM and DN should persist SCM certificate as the trust root.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1065?focusedWorklogId=243796&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243796
 ]

ASF GitHub Bot logged work on HDDS-1065:


Author: ASF GitHub Bot
Created on: 17/May/19 02:55
Start Date: 17/May/19 02:55
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #754: HDDS-1065. OM 
and DN should persist SCM certificate as the trust root. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/754#discussion_r284963127
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/CertificateClient.java
 ##
 @@ -135,10 +135,11 @@ boolean verifySignature(byte[] data, byte[] signature,
*
* @param pemEncodedCert- pem encoded X509 Certificate
* @param force - override any existing file
+   * @param caCert- Is CA certificate.
* @throws CertificateException - on Error.
*
*/
-  void storeCertificate(String pemEncodedCert, boolean force)
+  void storeCertificate(String pemEncodedCert, boolean force, boolean caCert)
 
 Review comment:
   Agree, let's add a new function as @anuengineer suggested. 
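To make the agreed direction concrete, a sketch of the interface shape follows. The second method name is hypothetical and the exception type is simplified, so treat this as an illustration rather than the final API.

{code:java}
import java.security.cert.CertificateException;

public interface CertificateStore {

  /**
   * Stores a component certificate.
   * @param pemEncodedCert pem encoded X509 Certificate
   * @param force override any existing file
   */
  void storeCertificate(String pemEncodedCert, boolean force)
      throws CertificateException;

  /**
   * Stores the SCM CA certificate as the trust root, instead of widening the
   * existing method with a boolean caCert flag.
   * @param pemEncodedCert pem encoded X509 Certificate
   * @param force override any existing file
   */
  void storeRootCACertificate(String pemEncodedCert, boolean force)
      throws CertificateException;
}
{code}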
 



Issue Time Tracking
---

Worklog Id: (was: 243796)
Time Spent: 1.5h  (was: 1h 20m)

> OM and DN should persist SCM certificate as the trust root.
> ---
>
> Key: HDDS-1065
> URL: https://issues.apache.org/jira/browse/HDDS-1065
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> OM and DN should persist SCM certificate as the trust root.






[jira] [Work logged] (HDDS-1538) Update ozone protobuf message for ACLs

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1538?focusedWorklogId=243793&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243793
 ]

ASF GitHub Bot logged work on HDDS-1538:


Author: ASF GitHub Bot
Created on: 17/May/19 02:47
Start Date: 17/May/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #828: HDDS-1538. Update 
ozone protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#issuecomment-493299384
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 417 | trunk passed |
   | +1 | compile | 206 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 832 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | trunk passed |
   | 0 | spotbugs | 237 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 418 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 408 | the patch passed |
   | +1 | compile | 212 | the patch passed |
   | +1 | cc | 212 | the patch passed |
   | +1 | javac | 212 | the patch passed |
   | -0 | checkstyle | 29 | hadoop-ozone: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | the patch passed |
   | +1 | findbugs | 431 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 151 | hadoop-hdds in the patch failed. |
   | -1 | unit | 980 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 6866 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/828 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 0a46e72851d1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c183bd8 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/testReport/ |
   | Max. process+thread count | 5400 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/objectstore-service 
hadoop-ozone/ozone-manager hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 243793)
Time Spent: 20m  (was: 10m)

> Update ozone protobuf message for ACLs
> --
>
> Key: HDDS-1538
> URL: 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=243792&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243792
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 17/May/19 02:21
Start Date: 17/May/19 02:21
Worklog Time Spent: 10m 
  Work Description: linyiqun commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r284958539
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+    this.currentBuffer = new TransactionBuffer();
+    this.readyBuffer = new TransactionBuffer();
+    this.omMetadataManager = omMetadataManager;
+
+    // Daemon thread which runs in back ground and flushes transactions to DB.
+    daemon = new Daemon(this::flushTransactions);
+    daemon.start();
+
+  }
+
+  /**
+   * Runs in a background thread and batches the transaction in currentBuffer
+   * and commit to DB.
+   */
+  private void flushTransactions() {
+    while(true) {
+      if (canFlush()) {
+        syncToDB = true;
+        setReadyBuffer();
+        final BatchOperation batchOperation = omMetadataManager.getStore()
+            .initBatchOperation();
+
+        readyBuffer.iterator().forEachRemaining((entry) -> {
+          try {
+            entry.getResponse().addToRocksDBBatch(omMetadataManager,
+                batchOperation);
+          } catch (IOException ex) {
+            // During Adding to RocksDB batch entry got an exception.
+            // We should terminate the OM.
+            String message = "During flush to DB encountered error " +
+                ex.getMessage();
+            ExitUtil.terminate(1, message);
+          }
+        });
+
+        try {
+          omMetadataManager.getStore().commitBatchOperation(batchOperation);
+        } catch (IOException ex) {
+          // During flush to rocksdb got an exception.
+          // We should terminate the OM.
+          String message = "During flush to DB encountered error " +
+              ex.getMessage();
+          ExitUtil.terminate(1, message);
+        }
+
+        int flushedTransactionsSize = readyBuffer.size();
+        flushedTransactionCount.addAndGet(flushedTransactionsSize);
+        flushIterations.incrementAndGet();
+
+        LOG.info("Sync Iteration {} flushed transactions in this iteration{}",
+            flushIterations.get(), flushedTransactionsSize);
+        readyBuffer.clear();
+        syncToDB = false;
+        // TODO: update the last updated index in OzoneManagerStateMachine.
+      }
+    }
+  }
+
+  /**
+   * Returns the flushed transaction count to OM DB.
+   * @return flushedTransactionCount

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=243791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243791
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 17/May/19 02:21
Start Date: 17/May/19 02:21
Worklog Time Spent: 10m 
  Work Description: linyiqun commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r284957277
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+    this.currentBuffer = new TransactionBuffer();
+    this.readyBuffer = new TransactionBuffer();
+    this.omMetadataManager = omMetadataManager;
+
+    // Daemon thread which runs in back ground and flushes transactions to DB.
+    daemon = new Daemon(this::flushTransactions);
+    daemon.start();
 
 Review comment:
   Actually, I meant that we should not start the flushTransactions thread inside 
OMDoubleBuffer. It would be better to let the caller invoke flushTransactions from 
outside the OMDoubleBuffer class; the flushing behavior should be triggered from 
outside rather than by the class itself, I think.
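   A hypothetical sketch of that suggestion, assuming flushTransactions() were made public and the buffer stopped starting a daemon in its constructor (neither is true of the current patch):

{code:java}
public class OzoneManagerRatisWiring {

  private final OzoneManagerDoubleBuffer doubleBuffer;
  private final Daemon flushDaemon;

  public OzoneManagerRatisWiring(OMMetadataManager omMetadataManager) {
    // The buffer no longer owns a thread; it only exposes flushTransactions().
    this.doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
    this.flushDaemon = new Daemon(doubleBuffer::flushTransactions);
  }

  public void start() {
    flushDaemon.start();     // the caller decides when flushing begins
  }

  public void stop() {
    flushDaemon.interrupt(); // ...and when it ends
  }
}
{code}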
 



Issue Time Tracking
---

Worklog Id: (was: 243791)
Time Spent: 2h 10m  (was: 2h)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current approach of 
> calling rocksdb.put() after every operation. At a given time only one batch will 
> be outstanding for flush, while newer transactions are accumulated in memory to 
> be flushed later.
>  
> The DoubleBuffer will have two buffers: one is currentBuffer, and the other is 
> readyBuffer. We add 

[jira] [Work logged] (HDDS-1451) SCMBlockManager findPipeline and createPipeline are not lock protected

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1451?focusedWorklogId=243785&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243785
 ]

ASF GitHub Bot logged work on HDDS-1451:


Author: ASF GitHub Bot
Created on: 17/May/19 01:51
Start Date: 17/May/19 01:51
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #799: HDDS-1451 : 
SCMBlockManager findPipeline and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#discussion_r284954692
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
 ##
 @@ -182,18 +182,28 @@ public AllocatedBlock allocateBlock(final long size, 
ReplicationType type,
   pipelineManager
   .getPipelines(type, factor, Pipeline.PipelineState.OPEN,
   excludeList.getDatanodes(), excludeList.getPipelineIds());
-  Pipeline pipeline;
+  Pipeline pipeline = null;
   if (availablePipelines.size() == 0) {
 try {
   // TODO: #CLUTIL Remove creation logic when all replication types and
   // factors are handled by pipeline creator
   pipeline = pipelineManager.createPipeline(type, factor);
 } catch (IOException e) {
-  LOG.error("Pipeline creation failed for type:{} factor:{}",
+  LOG.warn("Pipeline creation failed for type:{} factor:{}",
   type, factor, e);
-  break;
+  LOG.info("Checking one more time for suitable pipelines");
+  availablePipelines = pipelineManager
+  .getPipelines(type, factor, Pipeline.PipelineState.OPEN,
+  excludeList.getDatanodes(), excludeList.getPipelineIds());
+  if (availablePipelines.size() == 0) {
+LOG.info("Could not find available pipeline even after trying " +
 
 Review comment:
   Same as above.
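   Stepping back from the log-message nit: the JIRA summary asks for findPipeline and createPipeline to be lock protected. A minimal sketch of that shape follows (hypothetical, heavily simplified types; note the posted patch instead retries the lookup after a failed create):

{code:java}
import java.io.IOException;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class LockedPipelineChooser {

  private final ReentrantLock pipelineLock = new ReentrantLock();
  private final PipelineSource source;

  public LockedPipelineChooser(PipelineSource source) {
    this.source = source;
  }

  /** Lock-protects the find-or-create sequence so two concurrent callers
   *  cannot both decide that a new pipeline has to be created. */
  public String findOrCreatePipeline() throws IOException {
    pipelineLock.lock();
    try {
      List<String> available = source.getOpenPipelines();
      if (available.isEmpty()) {
        return source.createPipeline();
      }
      return available.get(0);
    } finally {
      pipelineLock.unlock();
    }
  }

  /** Minimal stand-in for PipelineManager. */
  public interface PipelineSource {
    List<String> getOpenPipelines();
    String createPipeline() throws IOException;
  }
}
{code}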
 



Issue Time Tracking
---

Worklog Id: (was: 243785)
Time Spent: 50m  (was: 40m)

> SCMBlockManager findPipeline and createPipeline are not lock protected
> --
>
> Key: HDDS-1451
> URL: https://issues.apache.org/jira/browse/HDDS-1451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> SCM BlockManager may try to allocate pipelines in cases where it is not 
> needed. This happens because BlockManagerImpl#allocateBlock is not lock 
> protected, so multiple pipelines can be allocated from it. One of the 
> pipeline allocations can fail even when a suitable pipeline already 
> exists.
> {code}
> 2019-04-22 22:34:14,336 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 6f4bb2d7-d660-4f9f-bc06-72b10f9a738e, Nodes: 76e1a493-fd55-4d67-9f5
> 5-c04fd6bd3a33{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}2b9850b2-aed3-4a40-91b5-2447dc5246bf{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}12248721-ea6a-453f-8dad-fc7fbe692f
> d2{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,386 INFO  impl.RoleInfo 
> (RoleInfo.java:shutdownLeaderElection(134)) - 
> e17b7852-4691-40c7-8791-ad0b0da5201f: shutdown LeaderElection
> 2019-04-22 22:34:14,388 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 552e28f3-98d9-41f3-86e0-c1b9494838a5, Nodes: e17b7852-4691-40c7-879
> 1-ad0b0da5201f{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}fd365bac-e26e-4b11-afd8-9d08cd1b0521{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}9583a007-7f02-4074-9e26-19bc18e29e
> c5{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,388 INFO  impl.RoleInfo (RoleInfo.java:updateAndGet(143)) 
> - e17b7852-4691-40c7-8791-ad0b0da5201f: start FollowerState
> 2019-04-22 22:34:14,388 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 5383151b-d625-4362-a7dd-c0d353acaf76, Nodes: 80f16ad6-3879-4a64-a3c
> 7-7719813cc139{ip: 

[jira] [Work logged] (HDDS-1451) SCMBlockManager findPipeline and createPipeline are not lock protected

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1451?focusedWorklogId=243786=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243786
 ]

ASF GitHub Bot logged work on HDDS-1451:


Author: ASF GitHub Bot
Created on: 17/May/19 01:51
Start Date: 17/May/19 01:51
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #799: HDDS-1451 : 
SCMBlockManager findPipeline and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#discussion_r284954644
 
 

 ##
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
 ##
 @@ -182,18 +182,28 @@ public AllocatedBlock allocateBlock(final long size, ReplicationType type,
           pipelineManager
               .getPipelines(type, factor, Pipeline.PipelineState.OPEN,
                   excludeList.getDatanodes(), excludeList.getPipelineIds());
-      Pipeline pipeline;
+      Pipeline pipeline = null;
       if (availablePipelines.size() == 0) {
         try {
           // TODO: #CLUTIL Remove creation logic when all replication types and
           // factors are handled by pipeline creator
           pipeline = pipelineManager.createPipeline(type, factor);
         } catch (IOException e) {
-          LOG.error("Pipeline creation failed for type:{} factor:{}",
+          LOG.warn("Pipeline creation failed for type:{} factor:{}",
              type, factor, e);
-          break;
+          LOG.info("Checking one more time for suitable pipelines");
 
 Review comment:
   Let's add the type and factor here as well.
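   For illustration, here is a minimal sketch of the find-or-create-with-recheck 
pattern the diff above describes, with the type and factor added to the retry 
log line as suggested. It is not the committed HDDS-1451 patch; the method name 
findOrCreatePipeline and the surrounding fields are assumptions based only on 
the snippet quoted above.

{code}
// Illustrative sketch only; not the actual BlockManagerImpl change.
// pipelineManager, LOG, List, Pipeline, ExcludeList, ReplicationType and
// ReplicationFactor are assumed from the diff quoted above.
private Pipeline findOrCreatePipeline(ReplicationType type,
    ReplicationFactor factor, ExcludeList excludeList) throws IOException {
  List<Pipeline> available = pipelineManager.getPipelines(type, factor,
      Pipeline.PipelineState.OPEN, excludeList.getDatanodes(),
      excludeList.getPipelineIds());
  if (!available.isEmpty()) {
    return available.get(0);
  }
  try {
    // Creation can race with other callers and fail even though a usable
    // pipeline exists by the time we get here.
    return pipelineManager.createPipeline(type, factor);
  } catch (IOException e) {
    LOG.warn("Pipeline creation failed for type:{} factor:{}", type, factor, e);
    LOG.info("Checking one more time for suitable pipelines, type:{} factor:{}",
        type, factor);
    available = pipelineManager.getPipelines(type, factor,
        Pipeline.PipelineState.OPEN, excludeList.getDatanodes(),
        excludeList.getPipelineIds());
    if (available.isEmpty()) {
      throw e;
    }
    return available.get(0);
  }
}
{code}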
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243786)
Time Spent: 1h  (was: 50m)

> SCMBlockManager findPipeline and createPipeline are not lock protected
> --
>
> Key: HDDS-1451
> URL: https://issues.apache.org/jira/browse/HDDS-1451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> SCM BlockManager may try to allocate pipelines in cases where it is not 
> needed. This happens because BlockManagerImpl#allocateBlock is not lock 
> protected, so multiple pipelines can be allocated from it. A pipeline 
> allocation can then fail even though a suitable pipeline already exists.
> {code}
> 2019-04-22 22:34:14,336 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 6f4bb2d7-d660-4f9f-bc06-72b10f9a738e, Nodes: 76e1a493-fd55-4d67-9f5
> 5-c04fd6bd3a33{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}2b9850b2-aed3-4a40-91b5-2447dc5246bf{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}12248721-ea6a-453f-8dad-fc7fbe692f
> d2{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,386 INFO  impl.RoleInfo 
> (RoleInfo.java:shutdownLeaderElection(134)) - 
> e17b7852-4691-40c7-8791-ad0b0da5201f: shutdown LeaderElection
> 2019-04-22 22:34:14,388 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 552e28f3-98d9-41f3-86e0-c1b9494838a5, Nodes: e17b7852-4691-40c7-879
> 1-ad0b0da5201f{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}fd365bac-e26e-4b11-afd8-9d08cd1b0521{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}9583a007-7f02-4074-9e26-19bc18e29e
> c5{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,388 INFO  impl.RoleInfo (RoleInfo.java:updateAndGet(143)) 
> - e17b7852-4691-40c7-8791-ad0b0da5201f: start FollowerState
> 2019-04-22 22:34:14,388 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 5383151b-d625-4362-a7dd-c0d353acaf76, Nodes: 80f16ad6-3879-4a64-a3c
> 7-7719813cc139{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}082ce481-7fb0-4f88-ac21-82609290a6a2{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}dd5f5a70-0217-4577-b7a2-c42aa139d1
> 8a{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=243783=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243783
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 17/May/19 01:49
Start Date: 17/May/19 01:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493289186
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 76 | Maven dependency ordering for branch |
   | +1 | mvninstall | 421 | trunk passed |
   | +1 | compile | 196 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 877 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 121 | trunk passed |
   | 0 | spotbugs | 241 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 423 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 395 | the patch passed |
   | +1 | compile | 202 | the patch passed |
   | +1 | cc | 202 | the patch passed |
   | +1 | javac | 202 | the patch passed |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 60 | hadoop-ozone generated 4 new + 2 unchanged - 0 fixed = 
6 total (was 2) |
   | +1 | findbugs | 456 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 159 | hadoop-hdds in the patch failed. |
   | -1 | unit | 226 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 4726 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerStateMachine |
   |   | hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   |   | hadoop.ozone.om.TestOzoneManagerLock |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/827 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 1ebd0979f400 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c183bd8 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/testReport/ |
   | Max. process+thread count | 1187 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---


[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=243784=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243784
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 17/May/19 01:49
Start Date: 17/May/19 01:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#discussion_r284954493
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -313,6 +313,8 @@ private OzoneManager(OzoneConfiguration conf) throws IOException,
     RPC.setProtocolEngine(configuration, OzoneManagerProtocolPB.class,
         ProtobufRpcEngine.class);
 
+    metadataManager = new OmMetadataManagerImpl(configuration);
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243784)
Time Spent: 1h 20m  (was: 1h 10m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path. (A toy sketch of the double-buffer pattern 
> follows below.)
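To make the double-buffer idea above concrete, here is a toy sketch, invented 
for this mail and not the actual OM implementation: request handlers append to 
a current buffer while a single background flush thread swaps it with a ready 
buffer and commits the ready batch to the DB in one go.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/**
 * Toy double buffer, for illustration only (all names are invented).
 * Assumes exactly one flush thread; writers may call add() concurrently.
 */
public class ToyDoubleBuffer<T> {
  private List<T> currentBuffer = new ArrayList<>();
  private List<T> readyBuffer = new ArrayList<>();

  /** Called by request handlers after they update the in-memory cache. */
  public synchronized void add(T entry) {
    currentBuffer.add(entry);
  }

  /** Called repeatedly by the single flush thread. */
  public void flush(Consumer<List<T>> commitBatchToDb) {
    synchronized (this) {
      List<T> tmp = currentBuffer;   // swap: new writes land in the other buffer
      currentBuffer = readyBuffer;
      readyBuffer = tmp;
    }
    if (!readyBuffer.isEmpty()) {
      commitBatchToDb.accept(readyBuffer);  // one batched DB commit per flush
      readyBuffer.clear();
    }
  }
}
{code}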



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14498) LeaseManager can loop forever on the file for which create has failed

2019-05-16 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841839#comment-16841839
 ] 

Sergey Shelukhin commented on HDFS-14498:
-

cc [~elgoiri] [~ashlhud] [~raviprak]

> LeaseManager can loop forever on the file for which create has failed 
> --
>
> Key: HDFS-14498
> URL: https://issues.apache.org/jira/browse/HDFS-14498
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Sergey Shelukhin
>Priority: Major
>
> The logs from the file creation are long gone due to the infinite lease 
> logging; however, the create presumably failed... the client that was trying 
> to write this file is definitely long dead.
> The version includes HDFS-4882.
> We get this log pattern repeating infinitely:
> {noformat}
> 2019-05-16 14:00:16,893 INFO 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1] has expired hard 
> limit
> 2019-05-16 14:00:16,893 INFO 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1], src=
> 2019-05-16 14:00:16,893 WARN 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.internalReleaseLease: 
> Failed to release lease for file . Committed blocks are waiting to be 
> minimally replicated. Try again later.
> 2019-05-16 14:00:16,893 WARN 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Cannot release the path 
>  in the lease [Lease.  Holder: DFSClient_NONMAPREDUCE_-20898906_61, 
> pending creates: 1]. It will be retried.
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: DIR* 
> NameSystem.internalReleaseLease: Failed to release lease for file . 
> Committed blocks are waiting to be minimally replicated. Try again later.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3357)
>   at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:573)
>   at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:509)
>   at java.lang.Thread.run(Thread.java:745)
> $  grep -c "Recovering.*DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 
> 1" hdfs_nn*
> hdfs_nn.log:1068035
> hdfs_nn.log.2019-05-16-14:1516179
> hdfs_nn.log.2019-05-16-15:1538350
> {noformat}
> Aside from an actual bug fix, it might make sense to make LeaseManager not 
> log so much, in case there are more bugs like this... (a simplified sketch of 
> the retry loop follows below)
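For illustration, a simplified, hypothetical sketch of the loop described above. 
This is not the actual LeaseManager source, and the helper names (running, 
expiredHardLimitLeases, internalReleaseLease, checkIntervalMs) are placeholders; 
the point is that a lease whose committed blocks can never reach minimal 
replication is retried, and logged, on every pass:

{code}
// Hypothetical sketch of the monitor behaviour; not HDFS code.
void monitorSketch() throws InterruptedException {
  while (running) {
    for (Lease lease : expiredHardLimitLeases()) {
      for (String path : lease.getPaths()) {
        try {
          // Throws while committed blocks are waiting to be minimally replicated.
          internalReleaseLease(lease, path);
        } catch (IOException e) {
          // The lease stays registered and is picked up again on the next pass.
          // If the blocks can never be minimally replicated (the create itself
          // failed), the same path is retried and logged forever.
          LOG.warn("Cannot release the path {} in the lease {}. It will be "
              + "retried.", path, lease, e);
        }
      }
    }
    Thread.sleep(checkIntervalMs);
  }
}
{code}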



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14498) LeaseManager can loop forever on the file for which create has failed

2019-05-16 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HDFS-14498:
---

 Summary: LeaseManager can loop forever on the file for which 
create has failed 
 Key: HDFS-14498
 URL: https://issues.apache.org/jira/browse/HDFS-14498
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.9.0
Reporter: Sergey Shelukhin


The logs from the file creation are long gone due to the infinite lease 
logging; however, the create presumably failed... the client that was trying to 
write this file is definitely long dead.
The version includes HDFS-4882.
We get this log pattern repeating infinitely:
{noformat}
2019-05-16 14:00:16,893 INFO 
[org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1] has expired hard limit
2019-05-16 14:00:16,893 INFO 
[org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1], src=
2019-05-16 14:00:16,893 WARN 
[org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.internalReleaseLease: 
Failed to release lease for file . Committed blocks are waiting to be 
minimally replicated. Try again later.
2019-05-16 14:00:16,893 WARN 
[org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Cannot release the path 
 in the lease [Lease.  Holder: DFSClient_NONMAPREDUCE_-20898906_61, 
pending creates: 1]. It will be retried.
org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: DIR* 
NameSystem.internalReleaseLease: Failed to release lease for file . 
Committed blocks are waiting to be minimally replicated. Try again later.
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3357)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:573)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:509)
at java.lang.Thread.run(Thread.java:745)



$  grep -c "Recovering.*DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 
1" hdfs_nn*
hdfs_nn.log:1068035
hdfs_nn.log.2019-05-16-14:1516179
hdfs_nn.log.2019-05-16-15:1538350
{noformat}

Aside from an actual bug fix, it might make sense to make LeaseManager not log 
so much, in case there are more bugs like this...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-05-16 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841838#comment-16841838
 ] 

Akira Ajisaka commented on HDFS-14090:
--

+1 for Approach 1. Thank you for the design doc and pinging me.

bq. For bad behaving namenodes or backfill jobs that put spiky loads on 
namenodes, more routers could potentially be added with a higher than usual 
handler count to deal with the surge in traffic if needed.

It would be very nice if DFSRouter is available via service discovery. 
(HADOOP-15774)

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: RBF_ Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact on clients connecting to healthy clusters vs 
> unhealthy clusters.
> For example, if there are 2 name nodes downstream and one of them is heavily 
> loaded with calls spiking RPC queue times, due to back pressure the same will 
> start reflecting on the router. As a result, clients connecting to 
> healthy/faster name nodes will also slow down, since the same RPC queue is 
> maintained for all calls at the router layer. Essentially the same IPC thread 
> pool is used by the router to connect to all name nodes.
> Currently the router uses one single RPC queue for all calls. Let's discuss 
> how we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify 
> the downstream name node, and maintain a separate queue for each underlying 
> name node. Another, simpler way is to maintain some sort of rate limiter 
> configured for each name node and let routers drop/reject/return errors for 
> requests past a certain threshold (a toy sketch of this follows below).
> This won't be a simple change, as the router's 'Server' layer would need 
> redesign and reimplementation. Currently this layer is the same as the name 
> node's.
> Opening this ticket to discuss, design and implement this feature.
>  
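To make the second, simpler option above concrete, here is a toy sketch 
(invented for this mail; it is not the RBF design or any existing Router 
class): cap the number of in-flight calls per downstream nameservice and fail 
fast on overflow, so traffic to healthy name nodes is not queued behind an 
overloaded one.

{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/** Toy per-nameservice throttle, for illustration only. */
public class PerNameserviceLimiter {
  private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
  private final int maxInFlightPerNameservice;

  public PerNameserviceLimiter(int maxInFlightPerNameservice) {
    this.maxInFlightPerNameservice = maxInFlightPerNameservice;
  }

  public <T> T invoke(String nameservice, Callable<T> downstreamCall)
      throws Exception {
    Semaphore s = permits.computeIfAbsent(nameservice,
        ns -> new Semaphore(maxInFlightPerNameservice));
    if (!s.tryAcquire()) {
      // The downstream name node is saturated: reject instead of queueing,
      // so callers of healthy name nodes are unaffected.
      throw new IOException("Too many in-flight calls to " + nameservice);
    }
    try {
      return downstreamCall.call();
    } finally {
      s.release();
    }
  }
}
{code}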



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1538) Update ozone protobuf message for ACLs

2019-05-16 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1538:
-
Status: Patch Available  (was: Open)

> Update ozone protobuf message for ACLs
> --
>
> Key: HDDS-1538
> URL: https://issues.apache.org/jira/browse/HDDS-1538
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Update ozone protobuf message for ACLs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1538) Update ozone protobuf message for ACLs

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1538:
-
Labels: pull-request-available  (was: )

> Update ozone protobuf message for ACLs
> --
>
> Key: HDDS-1538
> URL: https://issues.apache.org/jira/browse/HDDS-1538
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>
> Update ozone protobuf message for ACLs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1538) Update ozone protobuf message for ACLs

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1538?focusedWorklogId=243764=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243764
 ]

ASF GitHub Bot logged work on HDDS-1538:


Author: ASF GitHub Bot
Created on: 17/May/19 00:52
Start Date: 17/May/19 00:52
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #828: HDDS-1538. 
Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243764)
Time Spent: 10m
Remaining Estimate: 0h

> Update ozone protobuf message for ACLs
> --
>
> Key: HDDS-1538
> URL: https://issues.apache.org/jira/browse/HDDS-1538
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Update ozone protobuf message for ACLs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=243763=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243763
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 17/May/19 00:43
Start Date: 17/May/19 00:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493278407
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 382 | trunk passed |
   | +1 | compile | 202 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 814 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 125 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 413 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 391 | the patch passed |
   | +1 | compile | 212 | the patch passed |
   | +1 | javac | 212 | the patch passed |
   | +1 | checkstyle | 70 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 664 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 63 | hadoop-ozone generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) |
   | +1 | findbugs | 430 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 141 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1151 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 7128 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/819 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux c15ecb89aaa8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c183bd8 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/testReport/ |
   | Max. process+thread count | 4528 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243763)
Time Spent: 4h  (was: 3h 50m)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
>  

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=243756=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243756
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 17/May/19 00:17
Start Date: 17/May/19 00:17
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on 
[HDDS-1499](https://issues.apache.org/jira/browse/HDDS-1499) and 
[HDDS-1512](https://issues.apache.org/jira/browse/HDDS-1512). This PR has 
commits from HDDS-1499 and HDDS-1512.
   
   **Note for reviewers:**
   The last commit is part of this Jira. Opened a PR to get a Jenkins run and 
to get initial comments on the class design and refactor approach, so that a 
similar approach can be followed for other requests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243756)
Time Spent: 1h  (was: 50m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=243755=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243755
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 17/May/19 00:16
Start Date: 17/May/19 00:16
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on HDDS-1499 and HDDS-1512. This PR has commits from 
HDDS-1499 and HDDS-1512.
   
   **Note for reviewers:**
   The last commit is part of this Jira. Opened a PR to get a Jenkins run and 
to get initial comments on the class design and refactor approach, so that a 
similar approach can be followed for other requests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243755)
Time Spent: 50m  (was: 40m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=243751=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243751
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 17/May/19 00:14
Start Date: 17/May/19 00:14
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on HDDS-1499 and HDDS-1512.
   
   **Note for reviewers:**
   The last commit is part of this Jira. Opened a PR to get a Jenkins run and 
to get initial comments on the class design approach, so that a similar 
approach can be followed for other requests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243751)
Time Spent: 40m  (was: 0.5h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=243750=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243750
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 17/May/19 00:13
Start Date: 17/May/19 00:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on HDDS-1499 and HDDS-1512.
   
   **Note for reviewers:**
   The last commit is part of this Jira. Opened a PR to get a Jenkins run and 
to get initial comments on the class design approach, so that a similar 
approach can be followed for other requests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243750)
Time Spent: 0.5h  (was: 20m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1551:
-
Fix Version/s: 0.5.0
   Status: Patch Available  (was: Open)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=243749=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243749
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 17/May/19 00:12
Start Date: 17/May/19 00:12
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #827: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on HDDS-1499 and HDDS-1512.
   
   The last commit is part of this Jira. Opened a PR to get a Jenkins run and 
to get initial comments on the class design approach, so that a similar 
approach can be followed for other requests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243749)
Time Spent: 20m  (was: 10m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=243748=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243748
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 17/May/19 00:10
Start Date: 17/May/19 00:10
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #827: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243748)
Time Spent: 10m
Remaining Estimate: 0h

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1551:
-
Labels: pull-request-available  (was: )

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841818#comment-16841818
 ] 

Eric Yang commented on HDDS-1458:
-

[~anu] {quote}Sorry this makes no sense. If you want simulate disk failures, it 
is easier with a container based approach and it really does not matter where 
the containers run. {quote}

Some disk tests cannot be simulated without distributing IO across multiple 
disks.  A single node will have all containers running from the same disk.  It 
is harder to simulate isolated fault injection for the SCM disk when all 
containers share the same physical disk, IMHO, but the conversation is 
digressing from the layout of the current tool chain.

I will include the new tests in the tarball once they are written.  I don't 
have a way to do that in this patch because this patch only sets up the module 
layout.  In the follow-up patch, I will worry about the race condition between 
compiling the test code and packaging it into the tarball.  This means 
JUnit-based Java tests cannot be used.  I am less thrilled to look at test 
reports on a terminal console than through Jenkins, but the tests can be part 
of the tarball.  Maven and the Robot Framework can do exactly the same thing 
for keyword-based tests.  I don't know why we chose to do both when Maven is 
already used daily.  I am not going to interrupt the plan and will make the 
tests support both Maven and the Robot Framework.  Does this sound fair?

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-05-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841807#comment-16841807
 ] 

Hadoop QA commented on HDFS-12979:
--

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 8 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 16s | trunk passed |
| +1 | compile | 1m 1s | trunk passed |
| +1 | checkstyle | 0m 46s | trunk passed |
| +1 | mvnsite | 1m 10s | trunk passed |
| +1 | shadedclient | 13m 31s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 4s | trunk passed |
| +1 | javadoc | 0m 51s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 2s | the patch passed |
| +1 | compile | 1m 11s | the patch passed |
| +1 | javac | 1m 11s | the patch passed |
| -0 | checkstyle | 0m 48s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 325 unchanged - 6 fixed = 330 total (was 331) |
| +1 | mvnsite | 1m 23s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 14m 7s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 25s | the patch passed |
| +1 | javadoc | 0m 55s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 124m 26s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 48s | The patch does not generate ASF License warnings. |
| | | 186m 9s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12979 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12968957/HDFS-12979.007.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux e6048c99a7b8 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 03ea8ea |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/26798/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/26798/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
|  Test Results | 

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=243721=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243721
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 16/May/19 23:13
Start Date: 16/May/19 23:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493262095
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 369 | trunk passed |
   | +1 | compile | 190 | trunk passed |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 759 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 127 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 413 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 398 | the patch passed |
   | +1 | compile | 208 | the patch passed |
   | +1 | javac | 208 | the patch passed |
   | +1 | checkstyle | 58 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 64 | hadoop-ozone generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) |
   | +1 | findbugs | 432 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 134 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1146 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5356 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/819 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 53e772f24a15 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fab5b80 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/testReport/ |
   | Max. process+thread count | 5401 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243721)
Time Spent: 3h 50m  (was: 3h 40m)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
>

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=243720=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243720
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 16/May/19 23:09
Start Date: 16/May/19 23:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493261366
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 527 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 1 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 400 | trunk passed |
   | +1 | compile | 198 | trunk passed |
   | +1 | checkstyle | 46 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 766 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 122 | trunk passed |
   | 0 | spotbugs | 239 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 421 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 389 | the patch passed |
   | +1 | compile | 209 | the patch passed |
   | +1 | javac | 209 | the patch passed |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 64 | hadoop-ozone generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) |
   | +1 | findbugs | 433 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 136 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1121 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5893 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/819 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 8465f3a9d2d6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fab5b80 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/testReport/ |
   | Max. process+thread count | 4715 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243720)
Time Spent: 3h 40m  (was: 3.5h)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>

[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-05-16 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841794#comment-16841794
 ] 

Íñigo Goiri commented on HDFS-14090:


I think Approach 1 is the natural extension of the current model to manage 
this scenario.
In the past few months I've been making some changes to support unavailable 
subclusters, and I think this approach would handle that case a little more 
cleanly.

It would be nice to have a summary of where the main code changes would be.
It looks to me that we will have to make some changes to the commons part for 
both the RPC client and the server.

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: RBF_ Isolation design.pdf
>
>
> Router is a gateway to the underlying name nodes. Gateway architectures 
> should help minimize the impact on clients connecting to healthy clusters vs 
> unhealthy clusters.
> For example, if there are 2 name nodes downstream and one of them is heavily 
> loaded with calls spiking rpc queue times, the back pressure will start 
> reflecting on the router. As a result, clients connecting to healthy/faster 
> name nodes will also slow down, since the same rpc queue is maintained for 
> all calls at the router layer. Essentially the same IPC thread pool is used 
> by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss 
> how we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify 
> the downstream name node, and maintain a separate queue for each underlying 
> name node. Another, simpler way is to maintain some sort of rate limiter 
> configured for each name node and let routers drop/reject/send error 
> requests after a certain threshold. 
> This won't be a simple change, as the router's 'Server' layer would need a 
> redesign and reimplementation. Currently this layer is the same as the name 
> node's.
> Opening this ticket to discuss, design and implement this feature.
>  
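
For illustration only, a minimal sketch of the second, simpler idea in the 
description above: a per-name-node permit budget, with the router rejecting 
calls past a threshold. The class, method, and configuration names here are 
hypothetical and are not taken from the attached design doc.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/** Hypothetical per-nameservice throttle for the Router; a sketch, not HDFS code. */
public class PerNameserviceThrottle {

  /** Maximum outstanding downstream calls allowed per name node (assumed config). */
  private final int maxOutstanding;
  private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();

  public PerNameserviceThrottle(int maxOutstanding) {
    this.maxOutstanding = maxOutstanding;
  }

  /** Try to admit a call destined for the given downstream nameservice. */
  public boolean tryAcquire(String nsId) {
    return permits
        .computeIfAbsent(nsId, id -> new Semaphore(maxOutstanding))
        .tryAcquire();
  }

  /** Release the permit once the downstream call completes or fails. */
  public void release(String nsId) {
    Semaphore sem = permits.get(nsId);
    if (sem != null) {
      sem.release();
    }
  }
}
{code}

In such a scheme a router handler would call tryAcquire(nsId) before 
forwarding a request and return a retriable "too busy" style error when it 
fails, so one overloaded subcluster cannot occupy the shared handler pool.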



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841790#comment-16841790
 ] 

Anu Engineer commented on HDDS-1458:


{quote}The current test does not spawn real cluster. The current approach has 
two limitations that tie the tests into a single node.
{quote}
It simulates a cluster by launching a set of containers. From Ozone's 
perspective, that is a cluster.
{quote}Blockade [does not support docker 
swarm|https://blockade.readthedocs.io/en/latest/install.html] to test real 
distributed cluster.
{quote}
Again, you misunderstood me; that is not what I am saying. I am saying that 
from Ozone's point of view, a set of containers looks and feels like a set of 
processes running on different machines.
{quote}This is the reason that I was not planning to expose the disk tests 
using the current design beyond development environment.
{quote}
Sorry, this makes no sense. If you want to simulate disk failures, it is easier 
with a container based approach, and it really does not matter where the 
containers run. I would suggest that you get this working on a single machine 
first while allowing the current tool chain to keep working. Thanks

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841788#comment-16841788
 ] 

Eric Yang commented on HDDS-1458:
-

[~anu] The current test does not spawn a real cluster.  The current approach 
has two limitations that tie the tests to a single node.  1.  The ozone binary 
is not in the docker container image, and there is no mechanism to distribute 
the ozone binary to a real cluster.  2.  Blockade [does not support docker 
swarm|https://blockade.readthedocs.io/en/latest/install.html] to test a real 
distributed cluster.  The current design can work on a single node with 
multiple containers.  We opened HDDS-1495 to address the docker image issue and 
ensure that we can distribute docker images to multiple nodes for the real test 
to happen.  This is the reason I was not planning to expose the disk tests 
built on the current design beyond the development environment.  I will put the 
tests in the tarball when that content is written.

[~elek] {quote}Can you please update the patch with support this with the new 
tests as well?{quote}

The content of the disk tests is a follow-up patch, and I will add the scripts 
to the tarball when the time comes.  Does this work for you?

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.

2019-05-16 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841787#comment-16841787
 ] 

Íñigo Goiri commented on HDFS-14353:


Do you mind taking care of the checkstyle?

> Erasure Coding: metrics xmitsInProgress become to negative.
> ---
>
> Key: HDFS-14353
> URL: https://issues.apache.org/jira/browse/HDFS-14353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, erasure-coding
>Affects Versions: 3.3.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14353.001.patch, HDFS-14353.002.patch, 
> HDFS-14353.003.patch, HDFS-14353.004.patch, HDFS-14353.005.patch, 
> HDFS-14353.006.patch, HDFS-14353.007.patch, HDFS-14353.008.patch, 
> screenshot-1.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-16 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian resolved HDDS-1474.
--
   Resolution: Fixed
Fix Version/s: 0.4.1

The issue cannot be reproduced. [~swagle] [~avijayan], please reopen if it can 
be reproduced.

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a dir path as its value and assume a standard 
> filename, "datanode.id".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841775#comment-16841775
 ] 

Hudson commented on HDDS-1527:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16566 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16566/])
HDDS-1527. HDDS Datanode start fails due to datanode.id file read error. (xyao: 
rev c183bd8e2009c41ca9bdde964ec7e428dacc0c03)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java


> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing datanode.id file that 
> is not in yaml format (yaml format was added through HDDS-1473).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart then fails, since it looks for datanode.id in 
> ozone.metadata.dirs. 
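
Purely as an illustration of the intended behavior described above and in 
HDDS-1474 (treat the configured value as a location for a fixed datanode.id 
file name and fall back to ozone.metadata.dirs when the key is not set), here 
is a minimal sketch. It is not the committed fix, and the helper name is 
hypothetical; the yaml part of the fix would additionally need the reader to 
tolerate (or rewrite) a pre-HDDS-1473 datanode.id file instead of failing.

{code:java}
import java.io.File;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;

/** Hypothetical helper; sketches the path resolution implied by the report. */
public final class DatanodeIdPathSketch {

  private DatanodeIdPathSketch() {
  }

  /**
   * Resolve where datanode.id should live: use the configured location when
   * 'ozone.scm.datanode.id' is set, otherwise fall back to a fixed file name
   * under 'ozone.metadata.dirs' so that start and restart agree on the path.
   */
  public static File resolveDatanodeIdFile(OzoneConfiguration conf) {
    String configured = conf.get("ozone.scm.datanode.id");
    if (configured != null && !configured.isEmpty()) {
      return new File(configured);
    }
    return new File(conf.get("ozone.metadata.dirs"), "datanode.id");
  }
}
{code}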



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14234) Limit WebHDFS to specifc user, host, directory triples

2019-05-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841776#comment-16841776
 ] 

Anu Engineer commented on HDFS-14234:
-

Probably an issue with the Jenkins machines; sometimes they can be quite slow. 
I will take a look at this patch soon. Thanks


> Limit WebHDFS to specifc user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0002-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0003-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0004-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0005-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0006-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0007-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=243694=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243694
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 16/May/19 22:20
Start Date: 16/May/19 22:20
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493251510
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243694)
Time Spent: 3.5h  (was: 3h 20m)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-16 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841771#comment-16841771
 ] 

Siddharth Wagle commented on HDDS-1527:
---

Thanks for committing the patch [~xyao].

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing datanode.id file that 
> is not in yaml format (yaml format was added through HDDS-1473).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart then fails, since it looks for datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841770#comment-16841770
 ] 

Anu Engineer commented on HDDS-1458:


While I applaud your valiant efforts to reduce manual steps, I have a current 
workflow that works very well. I depend on python and robot tests for 
end-to-end verification, which launch real clusters and try out all kinds 
of commands. This workflow is critical to me today. So unless the new way and 
the old way are identical in the tests they run, and also have the ability to 
stay that way in the foreseeable future, let us have one consistent way. If I 
had to make a choice, I would choose what we have today.

It looks as if what we are asking is hard for you to do, and we are saying that 
after GA we might remove these tests, so asking you to do this extra work also 
does not seem fair. 

Do you think we should take a pause on this work for the time being and explore 
it later in the cycle? 


> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1527:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~swagle] for the contribution. I've committed the patch to trunk. 

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing datanode.id file that 
> is not in yaml format (yaml format was added through HDDS-1473).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart then fails, since it looks for datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?focusedWorklogId=243684=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243684
 ]

ASF GitHub Bot logged work on HDDS-1527:


Author: ASF GitHub Bot
Created on: 16/May/19 22:13
Start Date: 16/May/19 22:13
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #822: HDDS-1527. 
HDDS Datanode start fails due to datanode.id file read error
URL: https://github.com/apache/hadoop/pull/822
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243684)
Time Spent: 1h 20m  (was: 1h 10m)

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing datanode.id file that 
> is not in yaml format (yaml format was added through HDDS-1473).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart then fails, since it looks for datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1495) Create hadoop/ozone docker images with inline build process

2019-05-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841766#comment-16841766
 ] 

Anu Engineer commented on HDDS-1495:


bq. Release manager can build docker image and upload to docker hub from the 
same source code with synchronized versioning. Elek, Marton Anu Engineer Do you 
agree that this is practical solution to resolve the source code repository 
fragmentation issue?


Nope, that does not work. Release managers change all the time, and Apache 
Infra manages the Apache account on DockerHub, so we cannot push anything into 
Docker Hub ourselves. We built the current hadoop-runner approach based on what 
Apache Infra told us: they have GitHub hooks to rebuild and update the 
DockerHub images. Also, binary releases are just convenience artifacts, and 
having them on DockerHub allows us to run Ozone for testing easily. 

If you really want to use docker to run Ozone, you can use the k8s support that 
is being worked on. 

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HDDS-1495
> URL: https://issues.apache.org/jira/browse/HDDS-1495
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch, 
> HDDS-1495.003.patch, HDDS-1495.004.patch, HDDS-1495.005.patch, 
> HDDS-1495.006.patch, HDDS-1495.007.patch, Hadoop Docker Image inline build 
> process.pdf
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> The a) can be solved (as [~eyang] suggested) with using a personal docker 
> image during the vote and publish it to the dockerhub after the vote (in case 
> the permission can be set by the INFRA)
> Note: based on LEGAL-270 and linked discussion both approaches (inline build 
> process / external build process) are compatible with the apache release.
> Note: HDDS-851 and HADOOP-14898 contains more information about these 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=243675=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243675
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 16/May/19 22:05
Start Date: 16/May/19 22:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493248035
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 414 | trunk passed |
   | +1 | compile | 203 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 842 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 118 | trunk passed |
   | 0 | spotbugs | 235 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 412 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 397 | the patch passed |
   | +1 | compile | 209 | the patch passed |
   | +1 | javac | 209 | the patch passed |
   | +1 | checkstyle | 59 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 601 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 59 | hadoop-ozone generated 6 new + 2 unchanged - 0 fixed = 
8 total (was 2) |
   | -1 | findbugs | 152 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 124 | hadoop-hdds in the patch failed. |
   | -1 | unit | 99 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 4276 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/819 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 5fad57693ba1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fab5b80 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243675)
Time Spent: 3h 20m  (was: 3h 10m)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> 

[jira] [Work logged] (HDDS-1527) HDDS Datanode start fails due to datanode.id file read errors

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1527?focusedWorklogId=243677=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243677
 ]

ASF GitHub Bot logged work on HDDS-1527:


Author: ASF GitHub Bot
Created on: 16/May/19 22:09
Start Date: 16/May/19 22:09
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #822: HDDS-1527. HDDS 
Datanode start fails due to datanode.id file read error
URL: https://github.com/apache/hadoop/pull/822#issuecomment-493248926
 
 
   +1, Thanks @swagle  for the contribution. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243677)
Time Spent: 1h 10m  (was: 1h)

> HDDS Datanode start fails due to datanode.id file read errors
> -
>
> Key: HDDS-1527
> URL: https://issues.apache.org/jira/browse/HDDS-1527
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> * Ozone datanode start fails when there is an existing datanode.id file that 
> is not in yaml format (yaml format was added through HDDS-1473).
> * Further, when 'ozone.scm.datanode.id' is not configured, the datanode.id 
> file is created in a different directory than the fallback dir 
> (ozone.metadata.dirs). Restart then fails, since it looks for datanode.id in 
> ozone.metadata.dirs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=243667=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243667
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 16/May/19 21:46
Start Date: 16/May/19 21:46
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #819:  HDDS-1501 
: Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284910792
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_DEFAULT;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_KEY;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Implementation of ReconTaskController.
+ */
+public class ReconTaskControllerImpl implements ReconTaskController {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReconTaskControllerImpl.class);
+
+  private Map<String, ReconDBUpdateTask> reconDBUpdateTasks;
+  private ExecutorService executorService;
+  private int threadCount = 1;
+  private final Semaphore taskSemaphore = new Semaphore(1);
+  private final ReconOMMetadataManager omMetadataManager;
+  private Map<String, AtomicInteger> taskFailureCounter = new HashMap<>();
+  private static final int TASK_FAILURE_THRESHOLD = 2;
+  private ReconTaskStatusDao reconTaskStatusDao;
+
+  @Inject
+  public ReconTaskControllerImpl(OzoneConfiguration configuration,
+ ReconOMMetadataManager omMetadataManager,
+ Configuration sqlConfiguration) {
+this.omMetadataManager = omMetadataManager;
+reconDBUpdateTasks = new HashMap<>();
+threadCount = configuration.getInt(OZONE_RECON_TASK_THREAD_COUNT_KEY,
+OZONE_RECON_TASK_THREAD_COUNT_DEFAULT);
+executorService = Executors.newFixedThreadPool(threadCount);
+reconTaskStatusDao = new ReconTaskStatusDao(sqlConfiguration);
+  }
+
+  @Override
+  public void registerTask(ReconDBUpdateTask task) {
+String taskName = task.getTaskName();
+LOG.info("Registered task " + taskName + " with controller.");
+
+// Store task in Task Map.
+reconDBUpdateTasks.put(taskName, task);
+// Store Task in Task failure tracker.
+taskFailureCounter.put(taskName, new AtomicInteger(0));
+// Create DB record for the task.
+ReconTaskStatus reconTaskStatusRecord = new ReconTaskStatus(taskName,
+0L, 0L);
+reconTaskStatusDao.insert(reconTaskStatusRecord);
+  }
+
+  /**
+   * For every registered task, we try process step twice and then reprocess
+   * once (if process failed twice) to absorb the events. If a task has failed
+   * reprocess call more than 2 times across events, it is unregistered
+   * (blacklisted).
+   * @param events set of events
+   * @throws InterruptedException
+   */
+  @Override
+  public void consumeOMEvents(OMUpdateEventBatch events)
+  throws InterruptedException {
+
+
+taskSemaphore.acquire();
+
+try {
+  
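
The diff above is truncated by the digest. Purely to illustrate the policy 
stated in the consumeOMEvents javadoc (try the incremental process step twice, 
fall back to a full reprocess, and unregister a task once its reprocess has 
failed more than TASK_FAILURE_THRESHOLD times), here is a self-contained 
sketch. The Task interface and its process/reprocess signatures are stand-ins, 
not the actual ReconDBUpdateTask API under review in PR #819.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative sketch of the retry/blacklist policy; not the code in the PR. */
class TaskRetryPolicySketch {

  /** Minimal stand-in for ReconDBUpdateTask; the real interface differs. */
  interface Task {
    boolean process(Object events);       // incremental update from OM events
    boolean reprocess(Object omSnapshot); // full rebuild from an OM snapshot
  }

  static final int TASK_FAILURE_THRESHOLD = 2;

  /** Assumes registration pre-populated 'failures' with a counter per task. */
  static void consume(Object events, Object omSnapshot,
      Map<String, Task> tasks, Map<String, AtomicInteger> failures) {
    List<String> blacklisted = new ArrayList<>();
    for (Map.Entry<String, Task> entry : tasks.entrySet()) {
      Task task = entry.getValue();
      // Try the incremental step twice, then fall back to a full reprocess.
      boolean ok = task.process(events) || task.process(events);
      if (!ok) {
        ok = task.reprocess(omSnapshot);
      }
      if (ok) {
        failures.get(entry.getKey()).set(0);
      } else if (failures.get(entry.getKey()).incrementAndGet()
          > TASK_FAILURE_THRESHOLD) {
        // Unregister (blacklist) the task after repeated failures.
        blacklisted.add(entry.getKey());
      }
    }
    // Remove outside the iteration to avoid ConcurrentModificationException.
    blacklisted.forEach(tasks::remove);
  }
}
{code}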

[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=243668=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243668
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 16/May/19 21:46
Start Date: 16/May/19 21:46
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #819:  HDDS-1501 
: Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284910869
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_DEFAULT;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_KEY;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Implementation of ReconTaskController.
+ */
+public class ReconTaskControllerImpl implements ReconTaskController {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReconTaskControllerImpl.class);
+
+  private Map<String, ReconDBUpdateTask> reconDBUpdateTasks;
+  private ExecutorService executorService;
+  private int threadCount = 1;
+  private final Semaphore taskSemaphore = new Semaphore(1);
+  private final ReconOMMetadataManager omMetadataManager;
+  private Map<String, AtomicInteger> taskFailureCounter = new HashMap<>();
+  private static final int TASK_FAILURE_THRESHOLD = 2;
+  private ReconTaskStatusDao reconTaskStatusDao;
+
+  @Inject
+  public ReconTaskControllerImpl(OzoneConfiguration configuration,
+ ReconOMMetadataManager omMetadataManager,
+ Configuration sqlConfiguration) {
+this.omMetadataManager = omMetadataManager;
+reconDBUpdateTasks = new HashMap<>();
+threadCount = configuration.getInt(OZONE_RECON_TASK_THREAD_COUNT_KEY,
+OZONE_RECON_TASK_THREAD_COUNT_DEFAULT);
+executorService = Executors.newFixedThreadPool(threadCount);
+reconTaskStatusDao = new ReconTaskStatusDao(sqlConfiguration);
+  }
+
+  @Override
+  public void registerTask(ReconDBUpdateTask task) {
+String taskName = task.getTaskName();
+LOG.info("Registered task " + taskName + " with controller.");
+
+// Store task in Task Map.
+reconDBUpdateTasks.put(taskName, task);
+// Store Task in Task failure tracker.
+taskFailureCounter.put(taskName, new AtomicInteger(0));
+// Create DB record for the task.
+ReconTaskStatus reconTaskStatusRecord = new ReconTaskStatus(taskName,
+0L, 0L);
+reconTaskStatusDao.insert(reconTaskStatusRecord);
+  }
+
+  /**
+   * For every registered task, we try process step twice and then reprocess
+   * once (if process failed twice) to absorb the events. If a task has failed
+   * reprocess call more than 2 times across events, it is unregistered
+   * (blacklisted).
+   * @param events set of events
+   * @throws InterruptedException
+   */
+  @Override
+  public void consumeOMEvents(OMUpdateEventBatch events)
+  throws InterruptedException {
+
+
+taskSemaphore.acquire();
+
+try {
+  

[jira] [Commented] (HDFS-14431) RBF: Rename with multiple subclusters should fail if no eligible locations

2019-05-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841744#comment-16841744
 ] 

Hadoop QA commented on HDFS-14431:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 8s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 3 new + 7 unchanged - 0 fixed = 10 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 34s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRename |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14431 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968958/HDFS-14431-HDFS-13891.006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e521e0b5910d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 09f39bf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26799/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26799/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841742#comment-16841742
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
2s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
2s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m  
6s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
5s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
3s{color} | {color:red} The patch generated 4 new + 0 unchanged - 0 fixed = 4 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
10s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m 
10s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 2s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
11s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
20s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 43s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2696/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1458 

[jira] [Comment Edited] (HDDS-1495) Create hadoop/ozone docker images with inline build process

2019-05-16 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841733#comment-16841733
 ] 

Elek, Marton edited comment on HDDS-1495 at 5/16/19 9:27 PM:
-

Thanks [~ccondit] for the ideas. Ozone can be run as a plugin or as standalone. 
Usually we test it as standalone, but we definitely need some way to test it 
together with older hadoop versions. (For example to be sure that ozonefs works 
well with older hadoop versions, and spark and hive versions, or -- as you 
wrote -- to test the plugin approach.)

I didn't think about your idea until now, but it could work. It may require 
many different containers (like a matrix build: ozone (0.3.0, 0.4.0) + hadoop 
(2.7.3, 2.7.0, 2.8.0)), which can be solved in different ways (for example with 
many different branches or with some kind of matrix build).


was (Author: elek):
Thanks [~ccondit] for the ideas. Ozone can be run as a plugin or as standalone. 
Usually we test it as standalone, but we definitely need some way to test it 
together with older hadoop versions. (For example to be sure that ozonefs works 
well with older hadoop versions, and spark and hive versions, or -- as you 
wrote -- to test the plugin approach.)

I didn't think about your idea until now, but it could work. It may require 
many different containers (like a matrix build: ozone (0.3.0, 0.4.0) + hadoop 
(2.7.3, 2.7.0, 2.8.0)), which can be solved in different ways (for example with 
many different patches or with some kind of matrix build).

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HDDS-1495
> URL: https://issues.apache.org/jira/browse/HDDS-1495
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch, 
> HDDS-1495.003.patch, HDDS-1495.004.patch, HDDS-1495.005.patch, 
> HDDS-1495.006.patch, HDDS-1495.007.patch, Hadoop Docker Image inline build 
> process.pdf
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> Point a) can be solved (as [~eyang] suggested) by using a personal docker 
> image during the vote and publishing it to dockerhub after the vote (in case 
> the permissions can be set by INFRA).
> Note: based on LEGAL-270 and the linked discussion, both approaches (inline 
> build process / external build process) are compatible with the Apache 
> release.
> Note: HDDS-851 and HADOOP-14898 contain more information about these 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Commented] (HDDS-1495) Create hadoop/ozone docker images with inline build process

2019-05-16 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841733#comment-16841733
 ] 

Elek, Marton commented on HDDS-1495:


Thanks [~ccondit] for the ideas. Ozone can be run as a plugin or as standalone. 
Usually we test it as standalone, but we definitely need some way to test it 
together with older hadoop versions. (For example to be sure that ozonefs works 
well with older hadoop versions, and spark and hive versions, or -- as you 
wrote -- to test the plugin approach.)

I didn't think about your idea until now, but it could work. It may require 
many different containers (like a matrix build: ozone (0.3.0, 0.4.0) + hadoop 
(2.7.3, 2.7.0, 2.8.0)), which can be solved in different ways (for example with 
many different patches or with some kind of matrix build).

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HDDS-1495
> URL: https://issues.apache.org/jira/browse/HDDS-1495
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch, 
> HDDS-1495.003.patch, HDDS-1495.004.patch, HDDS-1495.005.patch, 
> HDDS-1495.006.patch, HDDS-1495.007.patch, Hadoop Docker Image inline build 
> process.pdf
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> Point a) can be solved (as [~eyang] suggested) by using a personal docker 
> image during the vote and publishing it to dockerhub after the vote (in case 
> the permissions can be set by INFRA).
> Note: based on LEGAL-270 and the linked discussion, both approaches (inline 
> build process / external build process) are compatible with the Apache 
> release.
> Note: HDDS-851 and HADOOP-14898 contain more information about these 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1495) Create hadoop/ozone docker images with inline build process

2019-05-16 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841728#comment-16841728
 ] 

Elek, Marton commented on HDDS-1495:


Sorry, I don't believe that "source code repository fragmentation" is a 
problem. I think it worked well until now.

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HDDS-1495
> URL: https://issues.apache.org/jira/browse/HDDS-1495
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch, 
> HDDS-1495.003.patch, HDDS-1495.004.patch, HDDS-1495.005.patch, 
> HDDS-1495.006.patch, HDDS-1495.007.patch, Hadoop Docker Image inline build 
> process.pdf
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> Point a) can be solved (as [~eyang] suggested) by using a personal docker 
> image during the vote and publishing it to dockerhub after the vote (in case 
> the permissions can be set by INFRA).
> Note: based on LEGAL-270 and the linked discussion, both approaches (inline 
> build process / external build process) are compatible with the Apache 
> release.
> Note: HDDS-851 and HADOOP-14898 contain more information about these 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841727#comment-16841727
 ] 

Elek, Marton commented on HDDS-1458:


We agreed to support both approaches. The old way is to include all the tests 
(including the new and future tests) in the distribution package and make it 
possible to run them from the distribution package (in the case of the blockade 
tests, with python only, no mvn needed).

Can you please update the patch to support this for the new tests as well?

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose and exercise the blockade 
> test cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=243636=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243636
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 16/May/19 21:08
Start Date: 16/May/19 21:08
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284898978
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconDBUpdateTask.java
 ##
 @@ -0,0 +1,66 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import java.util.Collection;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+
+/**
+ * Abstract class used to denote a Recon task that needs to act on OM DB 
events.
+ */
+public abstract class ReconDBUpdateTask {
+
+  private String taskName;
+
+  protected ReconDBUpdateTask(String taskName) {
+this.taskName = taskName;
+  }
+
+  /**
+   * Return task name.
+   * @return task name
+   */
+  public String getTaskName() {
+return taskName;
+  }
+
+  /**
+   * Return the list of tables that the task is listening on.
+   * Empty list means the task is NOT listening on any tables.
+   * @return Collection of Tables.
+   */
+  protected abstract Collection getTablesListeningOn();
 
 Review comment:
   Weird naming convention. What about getTaskTables()?
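
A minimal, standalone sketch of how a concrete task might plug into this 
abstract class, using the reviewer's suggested getTaskTables() name; the 
subclass and table names below are hypothetical and not part of the patch:

{code:java}
import java.util.Arrays;
import java.util.Collection;

// Simplified stand-in for the ReconDBUpdateTask pattern under review; the
// real class lives in org.apache.hadoop.ozone.recon.tasks and has more methods.
abstract class DbUpdateTask {
  private final String taskName;

  protected DbUpdateTask(String taskName) {
    this.taskName = taskName;
  }

  public String getTaskName() {
    return taskName;
  }

  // Tables whose OM DB updates this task wants to see; empty means none.
  protected abstract Collection<String> getTaskTables();
}

// Hypothetical concrete task that listens on the OM key table.
class KeyCountTask extends DbUpdateTask {
  KeyCountTask() {
    super("KeyCountTask");
  }

  @Override
  protected Collection<String> getTaskTables() {
    return Arrays.asList("keyTable");
  }
}
{code}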
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243636)
Time Spent: 2h 50m  (was: 2h 40m)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14431) RBF: Rename with multiple subclusters should fail if no eligible locations

2019-05-16 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14431:
---
Attachment: HDFS-14431-HDFS-13891.006.patch

> RBF: Rename with multiple subclusters should fail if no eligible locations
> --
>
> Key: HDFS-14431
> URL: https://issues.apache.org/jira/browse/HDFS-14431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14431-HDFS-13891.001.patch, 
> HDFS-14431-HDFS-13891.002.patch, HDFS-14431-HDFS-13891.003.patch, 
> HDFS-14431-HDFS-13891.004.patch, HDFS-14431-HDFS-13891.005.patch, 
> HDFS-14431-HDFS-13891.006.patch
>
>
> Currently, the rename will fail with FileNotFoundException which is not clear 
> to the user.
> The operation should fail stating the reason is that there are no eligible 
> destinations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-05-16 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841693#comment-16841693
 ] 

Chen Liang commented on HDFS-12979:
---

Thanks for the discussion, Plamen and Erik! Good point to bear in mind.

Posted the v007 patch to fix the failed tests, as well as the findbugs 
warnings. The issue was that with the new patch, ImageServlet would reject an 
image request if it has too small a delta (in terms of both time and txid). But 
a number of existing tests rely on submitting small-delta images to work. So I 
added a helper flag to allow unit tests to explicitly bypass this check.
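
A rough standalone sketch of the kind of guard described above, with a 
test-only bypass flag; the class, field, and threshold names are hypothetical 
and may not match the actual ImageServlet change:

{code:java}
// Hypothetical sketch: reject an uploaded image whose delta is too small in
// terms of BOTH time and txid, unless a test-only flag bypasses the check.
final class ImageUploadGuard {
  // Unit tests that intentionally upload tiny deltas can flip this flag.
  static volatile boolean skipDeltaCheckForTests = false;

  private final long minTxIdDelta;
  private final long minTimeDeltaMs;

  ImageUploadGuard(long minTxIdDelta, long minTimeDeltaMs) {
    this.minTxIdDelta = minTxIdDelta;
    this.minTimeDeltaMs = minTimeDeltaMs;
  }

  /** Returns true if the offered image should be accepted. */
  boolean shouldAccept(long lastTxId, long offeredTxId,
                       long lastUploadTimeMs, long nowMs) {
    if (skipDeltaCheckForTests) {
      return true;
    }
    boolean txDeltaBigEnough = offeredTxId - lastTxId >= minTxIdDelta;
    boolean timeDeltaBigEnough = nowMs - lastUploadTimeMs >= minTimeDeltaMs;
    // Accept when either delta is large enough; reject only if both are small.
    return txDeltaBigEnough || timeDeltaBigEnough;
  }
}
{code}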



> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch, HDFS-12979.007.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrap of the ObserverNode take too long. A StandbyNode should 
> copy the latest fsimage to the ObserverNode(s) along with the ANN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-05-16 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12979:
--
Attachment: HDFS-12979.007.patch

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch, HDFS-12979.007.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrap of the ObserverNode take too long. A StandbyNode should 
> copy the latest fsimage to the ObserverNode(s) along with the ANN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1451) SCMBlockManager findPipeline and createPipeline are not lock protected

2019-05-16 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1451:

Status: Patch Available  (was: Open)

> SCMBlockManager findPipeline and createPipeline are not lock protected
> --
>
> Key: HDDS-1451
> URL: https://issues.apache.org/jira/browse/HDDS-1451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> SCM BlockManager may try to allocate pipelines in cases where it is not 
> needed. This happens because BlockManagerImpl#allocateBlock is not lock 
> protected, so multiple pipelines can be allocated from it. One of the 
> pipeline allocations can fail even when an existing pipeline already 
> exists.
> {code}
> 2019-04-22 22:34:14,336 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 6f4bb2d7-d660-4f9f-bc06-72b10f9a738e, Nodes: 76e1a493-fd55-4d67-9f5
> 5-c04fd6bd3a33{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}2b9850b2-aed3-4a40-91b5-2447dc5246bf{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}12248721-ea6a-453f-8dad-fc7fbe692f
> d2{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,386 INFO  impl.RoleInfo 
> (RoleInfo.java:shutdownLeaderElection(134)) - 
> e17b7852-4691-40c7-8791-ad0b0da5201f: shutdown LeaderElection
> 2019-04-22 22:34:14,388 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 552e28f3-98d9-41f3-86e0-c1b9494838a5, Nodes: e17b7852-4691-40c7-879
> 1-ad0b0da5201f{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}fd365bac-e26e-4b11-afd8-9d08cd1b0521{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}9583a007-7f02-4074-9e26-19bc18e29e
> c5{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,388 INFO  impl.RoleInfo (RoleInfo.java:updateAndGet(143)) 
> - e17b7852-4691-40c7-8791-ad0b0da5201f: start FollowerState
> 2019-04-22 22:34:14,388 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 5383151b-d625-4362-a7dd-c0d353acaf76, Nodes: 80f16ad6-3879-4a64-a3c
> 7-7719813cc139{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}082ce481-7fb0-4f88-ac21-82609290a6a2{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}dd5f5a70-0217-4577-b7a2-c42aa139d1
> 8a{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,389 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> be4854e5-7933-4caa-b32e-f482cf500247, Nodes: 6e2356f1-479d-498b-876
> a-1c90623c498b{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}8ac46d94-9975-4eea-9448-2618c69d7bf3{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}a3ed36a1-44ca-47b2-b9b3-5aeef04595
> 18{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,390 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 21e368e2-f82a-4c61-9cc3-06e8de22ea6b, Nodes: 
> 82632040-5754-4122-b187-331879586842{ip: 192.168.0.104, host: 192.168.0.104, 
> certSerialId: null}923c8537-b869-4085-adcb-0a9accdcd089{ip: 192.168.0.104, 
> host: 192.168.0.104, certSerialId: 
> null}c6d790bf-e3a6-4064-acb5-f74796cd38a9{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}, Type:RATIS, Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,390 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> cccbc2ed-e0e2-4578-a8a2-94f4b645be52, Nodes: 
> 91ae6848-a778-43be-a4a1-5855f7adc0d8{ip: 192.168.0.104, host: 192.168.0.104, 
> certSerialId: null}8f330a03-40e2-4bd1-9b43-5e05b13d89f0{ip: 192.168.0.104, 
> host: 192.168.0.104, certSerialId: 
> null}4f3070dc-650b-48d7-87b5-d2076104e7b4{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}, Type:RATIS, Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,392 ERROR block.BlockManagerImpl 
> (BlockManagerImpl.java:allocateBlock(192)) - Pipeline creation failed for 
> type:RATIS factor:THREE
> org.apache.hadoop.hdds.scm.pipeline.InsufficientDatanodesException: Cannot 
> create pipeline of factor 3 using 2 nodes 20 healthy nodes 20 all 
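
For illustration only, a standalone sketch of making the find-or-create step 
atomic with a lock, which is the kind of protection the description above asks 
for; the class and method names are hypothetical, not the actual SCM code:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Without the lock, two threads can both miss in the "find" step and both
// create a pipeline; with it, lookup and creation happen atomically.
final class PipelineAllocator {
  private final List<String> openPipelines = new ArrayList<>();
  private final ReentrantLock lock = new ReentrantLock();

  String allocate() {
    lock.lock();
    try {
      if (!openPipelines.isEmpty()) {
        return openPipelines.get(0);                    // findPipeline() hit
      }
      String created = "pipeline-" + System.nanoTime(); // createPipeline()
      openPipelines.add(created);
      return created;
    } finally {
      lock.unlock();
    }
  }
}
{code}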

[jira] [Updated] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-16 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1501:

Status: Patch Available  (was: In Progress)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14323) Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters in hdfs file path

2019-05-16 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey reassigned HDFS-14323:
---

Assignee: Srinivasu Majeti

> Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters 
> in hdfs file path
> ---
>
> Key: HDFS-14323
> URL: https://issues.apache.org/jira/browse/HDFS-14323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
> Attachments: HDFS-14323v0.patch
>
>
> There was an enhancement to allow semicolons in source/target URLs for the 
> distcp use case as part of HDFS-13176, and a backward compatibility fix as 
> part of HDFS-13582. There still seems to be an issue when trying to trigger 
> distcp from a 3.x cluster to pull webhdfs data from a 2.x hadoop cluster. We 
> might need to adjust the existing fix as described below by checking whether 
> the url is already encoded or not. That fixes it. 
> diff --git 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
>  
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> index 5936603c34a..dc790286aff 100644
> --- 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> +++ 
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> @@ -609,7 +609,10 @@ URL toUrl(final HttpOpParam.Op op, final Path fspath,
>  boolean pathAlreadyEncoded = false;
>  try {
>  fspathUriDecoded = URLDecoder.decode(fspathUri.getPath(), "UTF-8");
> - pathAlreadyEncoded = true;
> + if(!fspathUri.getPath().equals(fspathUriDecoded))
> + {
> + pathAlreadyEncoded = true;
> + }
>  } catch (IllegalArgumentException ex) {
>  LOG.trace("Cannot decode URL encoded file", ex);
>  }
>  
>  
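
A small standalone example of the check in the diff above: URLDecoder only 
changes a string that actually contains encoded characters, so comparing the 
decoded form with the original tells us whether the path was already encoded 
(the sample paths are made up):

{code:java}
import java.net.URLDecoder;

public class EncodedPathCheck {
  public static void main(String[] args) throws Exception {
    String plain = "/data/semi;colon";       // not encoded
    String encoded = "/data/semi%3Bcolon";   // already encoded

    // pathAlreadyEncoded should stay false for the unencoded path...
    System.out.println(!plain.equals(URLDecoder.decode(plain, "UTF-8")));     // false
    // ...and become true only for the path that really was encoded.
    System.out.println(!encoded.equals(URLDecoder.decode(encoded, "UTF-8"))); // true
  }
}
{code}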



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1551:
-
Component/s: Ozone Manager

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication to the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> they will share a single code path.
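
For readers unfamiliar with the double-buffer pattern mentioned above, a 
minimal standalone sketch (the class and method names are illustrative and 
unrelated to the actual OM implementation): writers append to the current 
buffer under a short lock while a single flusher thread swaps the buffers and 
drains the filled one.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Minimal double-buffer sketch: add() is called by request handlers,
// swapBuffers() by the single flusher thread that writes the batch to the DB.
final class DoubleBuffer<T> {
  private List<T> current = new ArrayList<>();
  private List<T> flushing = new ArrayList<>();
  private final Object swapLock = new Object();

  void add(T op) {
    synchronized (swapLock) {
      current.add(op);
    }
  }

  /** Swap buffers and return the filled one; the caller drains and clears it. */
  List<T> swapBuffers() {
    synchronized (swapLock) {
      List<T> filled = current;
      current = flushing;   // writers now append to the other (drained) buffer
      flushing = filled;
      return filled;
    }
  }
}
{code}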



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1551:
-
Issue Type: Sub-task  (was: Task)
Parent: HDDS-505

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication to the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-16 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1551:


 Summary: Implement Bucket Write Requests to use Cache and 
DoubleBuffer
 Key: HDDS-1551
 URL: https://issues.apache.org/jira/browse/HDDS-1551
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Implement Bucket write requests to use the OM cache and double buffer.

Also, in OM we previously used the Ratis client for communication to the Ratis 
server; instead of that, use the Ratis server APIs.

 

In this Jira we will add the changes to implement bucket operations. HA and 
non-HA will have different code paths, but once all requests are implemented 
they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243566=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243566
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:44
Start Date: 16/May/19 18:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284847079
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineFactory.java
 ##
 @@ -61,4 +61,13 @@ public Pipeline create(ReplicationType type, 
ReplicationFactor factor,
   List nodes) {
 return providers.get(type).create(factor, nodes);
   }
+
+  @VisibleForTesting
+  public PipelineProvider getProvider(ReplicationType type) {
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243566)
Time Spent: 10.5h  (was: 10h 20m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.
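
A short standalone sketch of the change described: running the parallel work 
in a dedicated ForkJoinPool instead of the common pool that parallelStream() 
uses by default. The datanode names are made up, and relying on submit() to 
pin a parallel stream to the submitting pool is a common but informal JDK 
idiom rather than a documented guarantee:

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;

public class DedicatedPoolExample {
  public static void main(String[] args)
      throws InterruptedException, ExecutionException {
    List<String> datanodes = Arrays.asList("dn1", "dn2", "dn3");

    // Our own pool, sized to the number of processors, instead of commonPool().
    ForkJoinPool pool =
        new ForkJoinPool(Runtime.getRuntime().availableProcessors());
    try {
      // A parallel stream started from a task submitted to this pool
      // executes its work on this pool rather than the common pool.
      pool.submit(() ->
          datanodes.parallelStream().forEach(dn ->
              System.out.println("create pipeline on " + dn + " in "
                  + Thread.currentThread().getName()))
      ).get();
    } finally {
      pool.shutdown();
    }
  }
}
{code}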



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243564=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243564
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:42
Start Date: 16/May/19 18:42
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #714: HDDS-1406. 
Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-493185963
 
 
   Thank You @lokeshj1703 for the review.
   I have addressed the review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243564)
Time Spent: 10h 20m  (was: 10h 10m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243559=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243559
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:38
Start Date: 16/May/19 18:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284844530
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestory.java
 ##
 @@ -97,8 +97,11 @@ public void testPipelineCreationOnNodeRestart() throws 
Exception {
 }
 
 // try creating another pipeline now
+RatisPipelineProvider ratisPipelineProvider = (RatisPipelineProvider)
+pipelineManager.getPipelineFactory().getProvider(
+HddsProtos.ReplicationType.RATIS);
 try {
-  RatisPipelineUtils.createPipeline(pipelines.get(0), conf);
+  ratisPipelineProvider.createPipeline(pipelines.get(0));
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243559)
Time Spent: 10h 10m  (was: 10h)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243543=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243543
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:23
Start Date: 16/May/19 18:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838490
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
 ##
 @@ -1017,6 +1017,9 @@ public void stop() {
 } catch (Exception ex) {
   LOG.error("SCM Metadata store stop failed", ex);
 }
+
+// shutdown pipeline provider.
+pipelineManager.getPipelineFactory().shutdown();
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243543)
Time Spent: 9h 50m  (was: 9h 40m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243544=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243544
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:23
Start Date: 16/May/19 18:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838741
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/MockRatisPipelineProvider.java
 ##
 @@ -37,4 +37,9 @@ public MockRatisPipelineProvider(NodeManager nodeManager,
   protected void initializePipeline(Pipeline pipeline) throws IOException {
 // do nothing as the datanodes do not exists
   }
+
+  @Override
+  public void shutdown() {
+
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243544)
Time Spent: 10h  (was: 9h 50m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243542=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243542
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:23
Start Date: 16/May/19 18:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838447
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
 ##
 @@ -72,4 +72,9 @@ public Pipeline create(ReplicationFactor factor,
 .setNodes(nodes)
 .build();
   }
+
+  @Override
+  public void shutdown() {
+
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243542)
Time Spent: 9h 40m  (was: 9.5h)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243539=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243539
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:22
Start Date: 16/May/19 18:22
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838122
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -133,7 +173,86 @@ public Pipeline create(ReplicationFactor factor,
 .build();
   }
 
+
+  @Override
+  public void shutdown() {
+forkJoinPool.shutdownNow();
+  }
+
   protected void initializePipeline(Pipeline pipeline) throws IOException {
-RatisPipelineUtils.createPipeline(pipeline, conf);
+createPipeline(pipeline);
+  }
+
+  /**
+   * Sends ratis command to create pipeline on all the datanodes.
+   *
+   * @param pipeline  - Pipeline to be created
+   * @throws IOException if creation fails
+   */
+  public void createPipeline(Pipeline pipeline)
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243539)
Time Spent: 9h 20m  (was: 9h 10m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243540=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243540
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:22
Start Date: 16/May/19 18:22
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838303
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -346,6 +347,11 @@ public void triggerPipelineCreation() {
 backgroundPipelineCreator.triggerPipelineCreation();
   }
 
+  @Override
+  public PipelineFactory getPipelineFactory() {
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243540)
Time Spent: 9.5h  (was: 9h 20m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1548) Jenkins precommit build is broken for Ozone

2019-05-16 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HDDS-1548.
-
Resolution: Fixed

Jenkins Precommit build updated to reflect the changes.

> Jenkins precommit build is broken for Ozone
> ---
>
> Key: HDDS-1548
> URL: https://issues.apache.org/jira/browse/HDDS-1548
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
>
> The HDDS Jenkins precommit build has been broken since Build 2685 (May 13, 
> 2019, 11:00:40 PM).  It looks like the precommit build depends on Yetus 
> trunk.  This is extremely risky: when Yetus trunk breaks, it also breaks the 
> precommit build for Ozone.  The precommit build must use a released version 
> of Yetus to prevent cascaded regressions.
> A second problem is that the precommit build also depends on Marton's own 
> personal website to download ozone.sh.  It would be best to version control 
> ozone.sh in the hadoop-ozone/dev-support directory to prevent unpredictable 
> changes to ozone.sh over time, which can make precommit build reports 
> non-deterministic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14234) Limit WebHDFS to specifc user, host, directory triples

2019-05-16 Thread Clay B. (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841617#comment-16841617
 ] 

Clay B. commented on HDFS-14234:


I do not see TestWebHdfsTimeouts failing on my laptop. I do see Jenkins says:
{{Expected to find 'localhost:42528: connect timed out' but got unexpected 
exception: java.net.SocketTimeoutException: localhost:42528: Read timed out}}. 
I am not sure how I could have affected this test?

> Limit WebHDFS to specifc user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0002-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0003-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0004-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0005-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0006-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0007-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable use of HDFS as a dropbox for data - data goes in but cannot be 
> pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could prevent the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> [{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243537=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243537
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:21
Start Date: 16/May/19 18:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284837740
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
 ##
 @@ -75,4 +75,6 @@ void finalizeAndDestroyPipeline(Pipeline pipeline, boolean 
onTimeout)
   void startPipelineCreator();
 
   void triggerPipelineCreation();
+
+  PipelineFactory getPipelineFactory();
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243537)
Time Spent: 9h 10m  (was: 9h)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the
> commonPool. Use our own ForkJoinPool with parallelism set to the number of
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243536=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243536
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:21
Start Date: 16/May/19 18:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284837697
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -24,35 +24,75 @@
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.client.HddsClientUtils;
 import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.ContainerPlacementPolicy;
 import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRandom;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline.PipelineState;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.ratis.RatisHelper;
+import org.apache.ratis.client.RaftClient;
+import org.apache.ratis.grpc.GrpcTlsConfig;
+import org.apache.ratis.protocol.RaftClientReply;
+import org.apache.ratis.protocol.RaftGroup;
+import org.apache.ratis.protocol.RaftPeer;
+import org.apache.ratis.retry.RetryPolicy;
+import org.apache.ratis.rpc.SupportedRpcType;
+import org.apache.ratis.util.TimeDuration;
+import org.apache.ratis.util.function.CheckedBiConsumer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
+import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.ForkJoinWorkerThread;
+import java.util.concurrent.RejectedExecutionException;
 import java.util.stream.Collectors;
 
 /**
  * Implements Api for creating ratis pipelines.
  */
 public class RatisPipelineProvider implements PipelineProvider {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(RatisPipelineProvider.class);
+
   private final NodeManager nodeManager;
   private final PipelineStateManager stateManager;
   private final Configuration conf;
 
+  // Set parallelism at 3, as now in Ratis we create 1 and 3 node pipelines.
+  private final int parallelisimForPool = 3;
+
+  private final ForkJoinPool.ForkJoinWorkerThreadFactory factory =
+  (pool -> {
+final ForkJoinWorkerThread worker = ForkJoinPool.
+defaultForkJoinWorkerThreadFactory.newThread(pool);
+worker.setName("ratisCreatePipeline" + worker.getPoolIndex());
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243536)
Time Spent: 9h  (was: 8h 50m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the
> commonPool. Use our own ForkJoinPool with parallelism set to the number of
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1548) Jenkins precommit build is broken for Ozone

2019-05-16 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned HDDS-1548:
---

Assignee: Eric Yang

> Jenkins precommit build is broken for Ozone
> ---
>
> Key: HDDS-1548
> URL: https://issues.apache.org/jira/browse/HDDS-1548
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
>
> The HDDS Jenkins precommit build has been broken since Build 2685 (May 13, 
> 2019, 11:00:40 PM).  It looks like the precommit build depends on Yetus 
> trunk.  This is extremely risky: when Yetus trunk breaks, it also breaks the 
> precommit build for Ozone.  The precommit build must use a released version 
> of Yetus to prevent cascading regressions.
> A second problem is that the precommit build also depends on Marton's 
> personal website to download ozone.sh.  It would be best to version-control 
> ozone.sh in the hadoop-ozone/dev-support directory to prevent unpredictable 
> changes to ozone.sh over time, which can make the precommit build report 
> non-deterministic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=243529=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243529
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 16/May/19 18:20
Start Date: 16/May/19 18:20
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284837312
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -133,7 +173,86 @@ public Pipeline create(ReplicationFactor factor,
 .build();
   }
 
+
+  @Override
+  public void shutdown() {
+forkJoinPool.shutdownNow();
 
 Review comment:
    This is done based on Arpit's comment: on an unclean shutdown this 
terminates abruptly anyway, so we can use shutdownNow() instead of 
awaitTermination() in the normal case too.
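
To illustrate the trade-off being discussed (a hedged sketch, not the patch itself; the method names are illustrative), the two shutdown styles look roughly like this:

{code:java}
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

final class PoolShutdownSketch {

  /** Abrupt stop: discards queued tasks and interrupts running ones. */
  static void stopNow(ForkJoinPool pool) {
    pool.shutdownNow();
  }

  /** Graceful stop: accept no new tasks and wait for running ones to finish. */
  static void stopGracefully(ForkJoinPool pool) throws InterruptedException {
    pool.shutdown();
    if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
      pool.shutdownNow();  // fall back if tasks do not finish in time
    }
  }
}
{code}

The comment above argues for the first style: since an unclean shutdown already terminates abruptly, shutdownNow() is acceptable in the normal case as well.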
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243529)
Time Spent: 8h 50m  (was: 8h 40m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 50m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the
> commonPool. Use our own ForkJoinPool with parallelism set to the number of
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841615#comment-16841615
 ] 

Hadoop QA commented on HDDS-1458:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDDS-Build/2696/console in case of 
problems.


> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker-compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects. 
> We can introduce a profile with id "it" (short for integration tests).  This 
> will launch docker-compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14497) Write lock hold by metasave impact following RPC processing

2019-05-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841609#comment-16841609
 ] 

Hadoop QA commented on HDFS-14497:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 13s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 476 unchanged - 0 fixed = 477 total (was 476) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 289 unchanged - 1 fixed = 293 total (was 290) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m 14s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14497 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12968917/HDFS-14497.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux cfcecf8b8524 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 03ea8ea |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| javac | https://builds.apache.org/job/PreCommit-HDFS-Build/26797/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
| checkstyle | 

[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-16 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HDFS-14447:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, 
> HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, 
> HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, 
> HDFS-14447-HDFS-13891.08.patch, HDFS-14447-HDFS-13891.09.patch, error.png
>
>
> HDFS with RBF:
> We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration,
> and it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser 
> client would be refused when trying to impersonate, as shown in the screenshot.
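
For illustration only, a rough sketch of what serving these calls could look like on the Router side, modeled on how the NameNode handles the same protocol. The class name and wiring are assumptions (the real change would also need to register the protocol with the Router's RPC server); only the two RefreshUserMappingsProtocol methods and the Groups/ProxyUsers helpers are existing Hadoop APIs.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Groups;
import org.apache.hadoop.security.RefreshUserMappingsProtocol;
import org.apache.hadoop.security.authorize.ProxyUsers;

/** Hypothetical handler; not the actual patch, which wires this into the Router RPC server. */
public class RouterRefreshUserMappingsSketch
    implements RefreshUserMappingsProtocol {

  private final Configuration conf;

  public RouterRefreshUserMappingsSketch(Configuration conf) {
    this.conf = conf;
  }

  @Override
  public void refreshUserToGroupsMappings() throws IOException {
    // Reload the user-to-groups mapping service.
    Groups.getUserToGroupsMappingService(conf).refresh();
  }

  @Override
  public void refreshSuperUserGroupsConfiguration() throws IOException {
    // Re-read the hadoop.proxyuser.* settings so proxy users can impersonate again.
    ProxyUsers.refreshSuperUserGroupsConfiguration(conf);
  }
}
{code}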



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-16 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841595#comment-16841595
 ] 

Giovanni Matteo Fumarola commented on HDFS-14447:
-

Thanks [~shenyinjie] for working on this and [~lukmajercak] and [~elgoiri] for 
the review.
Committed to the branch.

> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, 
> HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, 
> HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, 
> HDFS-14447-HDFS-13891.08.patch, HDFS-14447-HDFS-13891.09.patch, error.png
>
>
> HDFS with RBF:
> We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration,
> and it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser 
> client would be refused when trying to impersonate, as shown in the screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1517) AllocateBlock call fails with ContainerNotFoundException

2019-05-16 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841586#comment-16841586
 ] 

Jitendra Nath Pandey edited comment on HDDS-1517 at 5/16/19 5:56 PM:
-

The patch moves the addition of the container to pipelineStateMap after its 
addition to the container cache. Now a thread may first find the container in 
the cache but not in pipelineStateMap. How is the race condition addressed? Do 
we guarantee that a thread will always look in a certain order?


was (Author: jnp):
The patch moves addition of container to pipelineStateMap after its addition to 
container cache. Now a thread may first find the container in the cache but not 
in pipelineStateMap. How is the race condition addressed? Do we guarantee that 
a thread will never look in pipelineStateMap before it looks in container cache?

> AllocateBlock call fails with ContainerNotFoundException
> 
>
> Key: HDDS-1517
> URL: https://issues.apache.org/jira/browse/HDDS-1517
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1517.000.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In the allocateContainer call, the container is first added to pipelineStateMap 
> and then added to the container cache. If two allocateBlock calls execute 
> concurrently, it might happen that one finds the container in the 
> pipelineStateMap while the container is yet to be added to the container 
> cache, hence failing with a CONTAINER_NOT_FOUND exception.
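
To make the ordering concrete, a simplified, hypothetical model of the two structures (all names are stand-ins, not the actual SCM code): with the patched order, the container is visible in the cache before it appears in pipelineStateMap, so a reader that discovers containers through pipelineStateMap and then resolves them in the cache can no longer hit CONTAINER_NOT_FOUND, which is the ordering assumption questioned in the comment above.

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Simplified, hypothetical model of the allocateContainer / allocateBlock ordering. */
public class ContainerOrderingSketch {

  // Stand-ins for the container cache and pipelineStateMap.
  private final Map<Long, String> containerCache = new ConcurrentHashMap<>();
  private final Map<String, Set<Long>> pipelineStateMap = new ConcurrentHashMap<>();

  /** Patched order: publish to the cache first, then to the pipeline state. */
  public void allocateContainer(String pipelineId, long containerId, String info) {
    containerCache.put(containerId, info);
    pipelineStateMap
        .computeIfAbsent(pipelineId, k -> ConcurrentHashMap.newKeySet())
        .add(containerId);
  }

  /**
   * A reader that discovers container ids via pipelineStateMap and then resolves
   * them in the cache is safe with the patched order: by the time an id shows up
   * in pipelineStateMap, the cache entry already exists. A reader that consults
   * the two structures in the opposite order is the case questioned above.
   */
  public String allocateBlock(String pipelineId) {
    Set<Long> ids = pipelineStateMap.getOrDefault(pipelineId, Collections.emptySet());
    for (long id : ids) {
      String info = containerCache.get(id);
      if (info != null) {
        return info;
      }
    }
    return null;  // no container allocated on this pipeline yet
  }
}
{code}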



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1517) AllocateBlock call fails with ContainerNotFoundException

2019-05-16 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841586#comment-16841586
 ] 

Jitendra Nath Pandey commented on HDDS-1517:


The patch moves addition of container to pipelineStateMap after its addition to 
container cache. Now a thread may first find the container in the cache but not 
in pipelineStateMap. How is the race condition addressed? Do we guarantee that 
a thread will never look in pipelineStateMap before it looks in container cache?

> AllocateBlock call fails with ContainerNotFoundException
> 
>
> Key: HDDS-1517
> URL: https://issues.apache.org/jira/browse/HDDS-1517
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1517.000.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In the allocateContainer call, the container is first added to pipelineStateMap 
> and then added to the container cache. If two allocateBlock calls execute 
> concurrently, it might happen that one finds the container in the 
> pipelineStateMap while the container is yet to be added to the container 
> cache, hence failing with a CONTAINER_NOT_FOUND exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-16 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841558#comment-16841558
 ] 

Íñigo Goiri commented on HDFS-14447:


+1 on  [^HDFS-14447-HDFS-13891.09.patch].

> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, 
> HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, 
> HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, 
> HDFS-14447-HDFS-13891.08.patch, HDFS-14447-HDFS-13891.09.patch, error.png
>
>
> HDFS with RBF:
> We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration,
> and it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser 
> client would be refused when trying to impersonate, as shown in the screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-05-16 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841554#comment-16841554
 ] 

Lukas Majercak commented on HDFS-14447:
---

patch09 lgtm

> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, 
> HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, 
> HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, 
> HDFS-14447-HDFS-13891.08.patch, HDFS-14447-HDFS-13891.09.patch, error.png
>
>
> HDFS with RBF:
> We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration,
> and it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser 
> client would be refused when trying to impersonate, as shown in the screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


