[jira] [Commented] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2019-05-02 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832240#comment-16832240
 ] 

Dinesh Chitlangia commented on HDFS-9059:
-

This is implemented in 3.1.0 and above. Do we still need to backport this to 
older versions?

> Expose lssnapshottabledir via WebHDFS
> -
>
> Key: HDFS-9059
> URL: https://issues.apache.org/jira/browse/HDFS-9059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> lssnapshottabledir should be exposed via WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2019-05-02 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9059 started by Dinesh Chitlangia.
---
> Expose lssnapshottabledir via WebHDFS
> -
>
> Key: HDFS-9059
> URL: https://issues.apache.org/jira/browse/HDFS-9059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> lssnapshottabledir should be exposed via WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2019-05-02 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDFS-9059:
---

Assignee: Dinesh Chitlangia

> Expose lssnapshottabledir via WebHDFS
> -
>
> Key: HDFS-9059
> URL: https://issues.apache.org/jira/browse/HDFS-9059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> lssnapshottabledir should be exposed via WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java

2019-05-02 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832172#comment-16832172
 ] 

Dinesh Chitlangia commented on HDFS-14438:
--

[~bianqi] Thanks for working on this. LGTM +1 (non-binding)

> Fix typo in HDFS for OfflineEditsVisitorFactory.java
> 
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: bianqi
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-14438.1.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832133#comment-16832133
 ] 

Hadoop QA commented on HDFS-14440:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 
11s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14440 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967718/HDFS-14440-HDFS-13891-06.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f41c703881aa 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 893c708 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26742/testReport/ |
| Max. process+thread count | 1003 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26742/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDFS-14454) RBF: getContentSummary() should allow non-existing folders

2019-05-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14454:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

Committed.
Thanx [~elgoiri] for the contribution.

> RBF: getContentSummary() should allow non-existing folders
> --
>
> Key: HDFS-14454
> URL: https://issues.apache.org/jira/browse/HDFS-14454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14454-HDFS-13891.000.patch, 
> HDFS-14454-HDFS-13891.001.patch, HDFS-14454-HDFS-13891.002.patch, 
> HDFS-14454-HDFS-13891.003.patch, HDFS-14454-HDFS-13891.004.patch, 
> HDFS-14454-HDFS-13891.005.patch, HDFS-14454-HDFS-13891.006.patch
>
>
> We have a mount point with HASH_ALL and one of the subclusters does not 
> contain the folder.
> In this case, getContentSummary() returns FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1480) Ip address should not be a part of the DatanodeID since it can change

2019-05-02 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-1480:
---

Assignee: Aravindan Vijayan

> Ip address should not be a part of the DatanodeID since it can change
> -
>
> Key: HDDS-1480
> URL: https://issues.apache.org/jira/browse/HDDS-1480
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie
>
> The DatanodeID identified by the DatanodeDetails object is persisted to disk 
> and read back on restart. The following fields are currently being serialized 
> and we should omit ip address from this set.
> {quote}
> UUID uuid;
> String ipAddress;
> String hostName;
> List ports;
> String certSerialId;
> {quote}
> cc: [~arpaga] this is follow-up from HDDS-1473



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832072#comment-16832072
 ] 

Ayush Saxena commented on HDFS-14440:
-

Updated v06 with the said change.
Please review!

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch, HDFS-14440-HDFS-13891-05.patch, 
> HDFS-14440-HDFS-13891-06.patch
>
>
> In the case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing getBlockLocation() 
> API, which is by default a sequential call.
> In the ideal scenario, where the file needs to be created, each subcluster is 
> checked sequentially; this could be done concurrently to save time.
> In the other case, where the file is found but its last block is null, we need to 
> call getFileInfo on all the locations to find where the file exists. This can also 
> be avoided by using a ConcurrentCall, since we already have the remoteLocation for 
> which getBlockLocation returned a non-null entry.
>  
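A minimal, hedged sketch of the concurrent check described above (the class name 
ConcurrentLocationProbe and the firstNonNull helper are invented for illustration; 
this is not the actual Router code):

{code:java}
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;

public class ConcurrentLocationProbe {
  /**
   * Runs one probe per subcluster in parallel and returns the first non-null
   * result, instead of querying the subclusters one after another.
   */
  public static <T> T firstNonNull(List<Callable<T>> probes, ExecutorService pool)
      throws InterruptedException {
    CompletionService<T> cs = new ExecutorCompletionService<>(pool);
    probes.forEach(cs::submit);
    T found = null;
    for (int i = 0; i < probes.size(); i++) {
      try {
        T result = cs.take().get();   // blocks until the next probe completes
        if (result != null && found == null) {
          found = result;             // remember the subcluster that answered
        }
      } catch (ExecutionException e) {
        // in the RBF case, a subcluster that does not have the file would land here
      }
    }
    return found;                     // null: no subcluster has the file
  }
}
{code}

The same pattern would cover the second case in the description: probing getFileInfo 
on every location concurrently rather than sequentially.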



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1448) RatisPipelineProvider should only consider open pipeline while excluding dn for pipeline allocation

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1448?focusedWorklogId=236629=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236629
 ]

ASF GitHub Bot logged work on HDDS-1448:


Author: ASF GitHub Bot
Created on: 02/May/19 23:09
Start Date: 02/May/19 23:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #786: HDDS-1448 : 
RatisPipelineProvider should only consider open pipeline …
URL: https://github.com/apache/hadoop/pull/786#issuecomment-488863930
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 72 | Maven dependency ordering for branch |
   | +1 | mvninstall | 452 | trunk passed |
   | +1 | compile | 209 | trunk passed |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 819 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 127 | trunk passed |
   | 0 | spotbugs | 251 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 449 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 424 | the patch passed |
   | +1 | compile | 210 | the patch passed |
   | +1 | javac | 210 | the patch passed |
   | +1 | checkstyle | 58 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 645 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 118 | the patch passed |
   | +1 | findbugs | 433 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 128 | hadoop-hdds in the patch failed. |
   | -1 | unit | 834 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 5299 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineUtils |
   |   | hadoop.ozone.web.TestOzoneVolumes |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.web.client.TestVolume |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-786/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/786 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5a464a4f4007 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d6b7609 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-786/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-786/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-786/3/testReport/ |
   | Max. process+thread count | 3220 (vs. ulimit of 5500) |
   

[jira] [Updated] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14440:

Attachment: HDFS-14440-HDFS-13891-06.patch

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch, HDFS-14440-HDFS-13891-05.patch, 
> HDFS-14440-HDFS-13891-06.patch
>
>
> In the case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing getBlockLocation() 
> API, which is by default a sequential call.
> In the ideal scenario, where the file needs to be created, each subcluster is 
> checked sequentially; this could be done concurrently to save time.
> In the other case, where the file is found but its last block is null, we need to 
> call getFileInfo on all the locations to find where the file exists. This can also 
> be avoided by using a ConcurrentCall, since we already have the remoteLocation for 
> which getBlockLocation returned a non-null entry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14426) RBF: Add delegation token total count as one of the federation metrics

2019-05-02 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832067#comment-16832067
 ] 

Fengnan Li commented on HDFS-14426:
---

[~ayushtkn] [~elgoiri] Thanks for the update! I can confirm that by git pull I 
can get HDFS-14374 locally. I will rebase the patch and re-upload soon.

> RBF: Add delegation token total count as one of the federation metrics
> --
>
> Key: HDFS-14426
> URL: https://issues.apache.org/jira/browse/HDFS-14426
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14426-HDFS-13891.001.patch, HDFS-14426.001.patch
>
>
> Currently router doesn't report the total number of current valid delegation 
> tokens it has, but this piece of information is useful for monitoring and 
> understanding the real time situation of tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832060#comment-16832060
 ] 

Eric Yang commented on HDDS-1458:
-

[~ebadger] On the mailing thread, [~ste...@apache.org] suggested using the "dist" 
profile.  Everyone agreed on an optional profile, hence I created the docker profile.  
My mistake was not using the dist profile name.  No one disputed using the dist 
profile, and I filed YARN-9523 to correct that mistake.  I think renaming the docker 
profile to the dist profile is a good correction toward the agreed end goal.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have ability to start docker compose and exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  This is 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeout.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832052#comment-16832052
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] {quote}starting docker based pseudo cluster from a (released or dev) 
distribution. In this case the mount is not a problem. I think here we should 
use mounting to ensure we have exactly the same bits inside and outside. I 
can't see any problem here.{quote}

A dev build can be tested with the tarball.  There is no need to involve Docker until 
the finished goods are ready for transport. Mounting binaries from outside the 
container makes it more difficult to transport those goods, which defeats the purpose 
of using container technology in the first place.

{quote}The second use case is to provide independent, portable docker-compose 
files. We have this one even now:{quote}

Here the Docker image is used as a configuration-file transport mechanism.  I think 
that is a convoluted process; there are more efficient ways to transport config files, 
IMHO.  Those instructions also require running the docker rm command to destroy the 
instances.

{quote}5. The release tar file also contains the compose directory. I think 
it's a very important part. With mounting the distribution package from the 
docker-compose files we can provide the easiest UX to start a pseudo cluster 
without any additional image creation.{quote}

Wouldn't it be better to just give the UX team a YAML file and let their 
docker-compose fetch self-contained Docker images from Docker Hub without downloading 
the tarball?  It seems we are using the tools in a way their design never intended, 
which creates more problems.  There is no guarantee the UX team gets a successful run, 
because tarballs and container images may change over time, and there is no CRC check 
or versioning mechanism to ensure a downloaded tarball works with version x of the 
Docker image.  I am quite puzzled by the choice to mount binaries; it is like ordering 
a shipping container for a house move, putting only the key inside, and leaving all 
the furniture outside.  It works, but it is completely impractical.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have ability to start docker compose and exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  This is 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeout.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1448) RatisPipelineProvider should only consider open pipeline while excluding dn for pipeline allocation

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1448?focusedWorklogId=236586=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236586
 ]

ASF GitHub Bot logged work on HDDS-1448:


Author: ASF GitHub Bot
Created on: 02/May/19 22:09
Start Date: 02/May/19 22:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #786: HDDS-1448 : 
RatisPipelineProvider should only consider open pipeline …
URL: https://github.com/apache/hadoop/pull/786#issuecomment-488850232
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 410 | trunk passed |
   | +1 | compile | 196 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 778 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 118 | trunk passed |
   | 0 | spotbugs | 310 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 488 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 381 | the patch passed |
   | +1 | compile | 194 | the patch passed |
   | +1 | javac | 194 | the patch passed |
   | -0 | checkstyle | 24 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 614 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 123 | the patch passed |
   | +1 | findbugs | 432 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 129 | hadoop-hdds in the patch failed. |
   | -1 | unit | 688 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 4940 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.om.TestOzoneManagerRestInterface |
   |   | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-786/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/786 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 40b3a335432a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7a3188d |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-786/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-786/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 

[jira] [Commented] (HDFS-14437) Exception happened when rollEditLog expects empty EditsDoubleBuffer.bufCurrent but not

2019-05-02 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832024#comment-16832024
 ] 

Daryn Sharp commented on HDFS-14437:


Unless I'm overlooking new details, I explained on the cited HDFS-10943 that 
this is not an edit log problem.
{quote}The quorum output stream will always crash the NN if edits are rolled 
while a node is down. The edit roll will sync, flush, close. As we can see, 
closing a logger will fail if there are unflushed bytes. The issue is 
QuorumOutputStream#flush succeeds with only a simple majority of loggers. Those 
that failed cause close to fail due to unflushed bytes. Someone familiar with 
QJM (we don't use it) will need to decide if it's safe for the quorum's flush 
to clear the buffers of failed loggers.
{quote}
QJM uses async streams. If a stream fails to flush, it will buffer the data.  If 
those unflushed streams are closed, e.g. during a log roll, they abort due to 
the unflushed data.  The edit log layer cannot fix this bug in the underlying 
quorum streams.
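For readers who have not looked at the double-buffered streams, here is a hedged, 
simplified sketch of the behaviour described above (SketchDoubleBuffer is an invented 
name, not the actual EditsDoubleBuffer implementation): a close during a roll aborts 
whenever a logger still holds unflushed bytes.

{code:java}
import java.io.IOException;

class SketchDoubleBuffer {
  private final StringBuilder bufCurrent = new StringBuilder(); // being written
  private final StringBuilder bufReady = new StringBuilder();   // being flushed

  void write(String edit) { bufCurrent.append(edit); }

  void setReadyToFlush() {            // swap step, done under the edit log lock
    bufReady.append(bufCurrent);
    bufCurrent.setLength(0);
  }

  void flushTo(Appendable out) throws IOException {
    out.append(bufReady);             // a failed quorum logger never gets here,
    bufReady.setLength(0);            // so its bufReady keeps the pending bytes
  }

  void close() throws IOException {
    int pending = bufCurrent.length() + bufReady.length();
    if (pending > 0) {                // this precondition is what aborts the roll
      throw new IOException(pending + " bytes still to be flushed");
    }
  }
}
{code}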

> Exception happened when   rollEditLog expects empty 
> EditsDoubleBuffer.bufCurrent  but not
> -
>
> Key: HDFS-14437
> URL: https://issues.apache.org/jira/browse/HDFS-14437
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode, qjm
>Reporter: angerszhu
>Priority: Major
>
> For the problem mentioned in https://issues.apache.org/jira/browse/HDFS-10943, 
> I have gone through the process of writing and flushing the EditLog and some 
> important functions. I found that in the FSEditLog class, the close() function 
> calls the following sequence:
>  
> {code:java}
> waitForSyncToFinish();
> endCurrentLogSegment(true);{code}
> Since we have acquired the object lock in close(), when the waitForSyncToFinish() 
> method returns it means all logSync work has finished and all data in bufReady has 
> been flushed out; and since the current thread holds the lock on this object, no 
> other thread can acquire it while endCurrentLogSegment() runs, so no new edit log 
> entries can be written into currentBuf.
> But when we don't call waitForSyncToFinish() before endCurrentLogSegment(), an 
> auto-scheduled logSync() flush may still be in progress, because that flush step 
> does not need synchronization, as mentioned in the comment of the logSync() 
> method:
>  
> {code:java}
> /**
>  * Sync all modifications done by this thread.
>  *
>  * The internal concurrency design of this class is as follows:
>  *   - Log items are written synchronized into an in-memory buffer,
>  * and each assigned a transaction ID.
>  *   - When a thread (client) would like to sync all of its edits, logSync()
>  * uses a ThreadLocal transaction ID to determine what edit number must
>  * be synced to.
>  *   - The isSyncRunning volatile boolean tracks whether a sync is currently
>  * under progress.
>  *
>  * The data is double-buffered within each edit log implementation so that
>  * in-memory writing can occur in parallel with the on-disk writing.
>  *
>  * Each sync occurs in three steps:
>  *   1. synchronized, it swaps the double buffer and sets the isSyncRunning
>  *  flag.
>  *   2. unsynchronized, it flushes the data to storage
>  *   3. synchronized, it resets the flag and notifies anyone waiting on the
>  *  sync.
>  *
>  * The lack of synchronization on step 2 allows other threads to continue
>  * to write into the memory buffer while the sync is in progress.
>  * Because this step is unsynchronized, actions that need to avoid
>  * concurrency with sync() should be synchronized and also call
>  * waitForSyncToFinish() before assuming they are running alone.
>  */
> public void logSync() {
>   long syncStart = 0;
>   // Fetch the transactionId of this thread. 
>   long mytxid = myTransactionId.get().txid;
>   
>   boolean sync = false;
>   try {
> EditLogOutputStream logStream = null;
> synchronized (this) {
>   try {
> printStatistics(false);
> // if somebody is already syncing, then wait
> while (mytxid > synctxid && isSyncRunning) {
>   try {
> wait(1000);
>   } catch (InterruptedException ie) {
>   }
> }
> //
> // If this transaction was already flushed, then nothing to do
> //
> if (mytxid <= synctxid) {
>   numTransactionsBatchedInSync++;
>   if (metrics != null) {
> // Metrics is non-null only when used inside name node
> metrics.incrTransactionsBatchedInSync();
>   }
>   return;
> }
>
> // now, this thread will do the sync
> syncStart = txid;
> isSyncRunning = true;
> sync = true;
> // swap buffers
> try {
>   if 

[jira] [Commented] (HDFS-14453) Improve Bad Sequence Number Error Message

2019-05-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832013#comment-16832013
 ] 

Hudson commented on HDFS-14453:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16498 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16498/])
HDFS-14453. Improve Bad Sequence Number Error Message. Contributed by (weichiu: 
rev d6b7609c9674c3d0175868d7190293f1925d779b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> Improve Bad Sequence Number Error Message
> -
>
> Key: HDFS-14453
> URL: https://issues.apache.org/jira/browse/HDFS-14453
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: noob
> Fix For: 3.3.0
>
> Attachments: HDFS-14453.001.patch
>
>
> {code:java|title=DataStreamer.java}
>   if (one.getSeqno() != seqno) {
> throw new IOException("ResponseProcessor: Expecting seqno" +
> " for block " + block +
> one.getSeqno() + " but received " + seqno);
>   }
> {code}
> https://github.com/apache/hadoop/blob/685cb83e4c3f433c5147e35217ce79ea520a0da5/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1154-L1158
> There is no space between the {{block}} and the {{one.getSeqno()}}.  Please 
> change to:
> {code:java}
>   if (one.getSeqno() != seqno) {
> throw new IOException("ResponseProcessor: Expecting seqno " + 
> one.getSeqno()
> + " for block " + block + " but received " + seqno);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14453) Improve Bad Sequence Number Error Message

2019-05-02 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832004#comment-16832004
 ] 

Shweta commented on HDFS-14453:
---

Thanks for the commit [~jojochuang]

> Improve Bad Sequence Number Error Message
> -
>
> Key: HDFS-14453
> URL: https://issues.apache.org/jira/browse/HDFS-14453
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: noob
> Fix For: 3.3.0
>
> Attachments: HDFS-14453.001.patch
>
>
> {code:java|title=DataStreamer.java}
>   if (one.getSeqno() != seqno) {
> throw new IOException("ResponseProcessor: Expecting seqno" +
> " for block " + block +
> one.getSeqno() + " but received " + seqno);
>   }
> {code}
> https://github.com/apache/hadoop/blob/685cb83e4c3f433c5147e35217ce79ea520a0da5/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1154-L1158
> There is no space between the {{block}} and the {{one.getSeqno()}}.  Please 
> change to:
> {code:java}
>   if (one.getSeqno() != seqno) {
> throw new IOException("ResponseProcessor: Expecting seqno " + 
> one.getSeqno()
> + " for block " + block + " but received " + seqno);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13532) RBF: Adding security

2019-05-02 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832003#comment-16832003
 ] 

CR Hota commented on HDFS-13532:


[~hexiaoqiao] [~elgoiri] 

Sharing some stats on ZooKeeper testing. We ran some more tests on the ZooKeeper 
token store lately; it was easy to store approximately 2 million tokens, and we did 
not test beyond that. While configuring ZooKeeper, especially on the client side, 
bumping up jute.maxbuffer is important. The size should be large enough for the 
ZooKeeper client to stream all tokens, and it depends on how many tokens are being 
created and streamed.
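As a hedged illustration of the client-side tuning mentioned above (the 64 MB value 
and the helper class are assumptions for the example, not figures from this thread): 
jute.maxbuffer is read by the ZooKeeper client as a JVM system property, so it has to 
be in place before the client is constructed.

{code:java}
import org.apache.zookeeper.ZooKeeper;

public class ZkTokenStoreClient {
  public static ZooKeeper connect(String connectString) throws Exception {
    // Illustrative 64 MB limit; size it to the full token payload the client
    // must be able to stream back (usually set via -Djute.maxbuffer=... instead).
    System.setProperty("jute.maxbuffer", String.valueOf(64 * 1024 * 1024));
    return new ZooKeeper(connectString, 30_000 /* session timeout ms */, event -> { });
  }
}
{code}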

> RBF: Adding security
> 
>
> Key: HDFS-13532
> URL: https://issues.apache.org/jira/browse/HDFS-13532
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ 
> Security delegation token thoughts_updated.pdf, RBF _ Security delegation 
> token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, RBF_ 
> Security delegation token thoughts_updated_3.pdf, Security_for_Router-based 
> Federation_design_doc.pdf
>
>
> HDFS Router based federation should support security. This includes 
> authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14451) Incorrect header or version mismatch log message

2019-05-02 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14451:
--
Attachment: HDFS-14451.002.patch

> Incorrect header or version mismatch log message
> 
>
> Key: HDFS-14451
> URL: https://issues.apache.org/jira/browse/HDFS-14451
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: noob
> Attachments: HDFS-14451.001.patch, HDFS-14451.002.patch
>
>
> {code:java|title=Server.java}
>   if (!RpcConstants.HEADER.equals(dataLengthBuffer)
>   || version != CURRENT_VERSION) {
> //Warning is ok since this is not supposed to happen.
> LOG.warn("Incorrect header or version mismatch from " + 
>  hostAddress + ":" + remotePort +
>  " got version " + version + 
>  " expected version " + CURRENT_VERSION);
> setupBadVersionResponse(version);
> return -1;
> {code}
> This message should include the value of {{RpcConstants.HEADER}} and 
> {{dataLengthBuffer}} in addition to just the version information or else that 
> data is lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14451) Incorrect header or version mismatch log message

2019-05-02 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832000#comment-16832000
 ] 

Shweta commented on HDFS-14451:
---

Thanks for the review [~jojochuang]. 
Sure, I didn't previously add the {{setupBadVersionResponse(version);}} call for the 
first case since it doesn't have any header information sent, but as you mentioned 
in our offline sync, one reason for a difference in header length can also be the 
client and server having different versions.

Added the call to {{setupBadVersionResponse(version);}} in patch v002. Please 
review. Thanks.

> Incorrect header or version mismatch log message
> 
>
> Key: HDFS-14451
> URL: https://issues.apache.org/jira/browse/HDFS-14451
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: noob
> Attachments: HDFS-14451.001.patch
>
>
> {code:java|title=Server.java}
>   if (!RpcConstants.HEADER.equals(dataLengthBuffer)
>   || version != CURRENT_VERSION) {
> //Warning is ok since this is not supposed to happen.
> LOG.warn("Incorrect header or version mismatch from " + 
>  hostAddress + ":" + remotePort +
>  " got version " + version + 
>  " expected version " + CURRENT_VERSION);
> setupBadVersionResponse(version);
> return -1;
> {code}
> This message should include the value of {{RpcConstants.HEADER}} and 
> {{dataLengthBuffer}} in addition to just the version information or else that 
> data is lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14451) Incorrect header or version mismatch log message

2019-05-02 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832000#comment-16832000
 ] 

Shweta edited comment on HDFS-14451 at 5/2/19 9:28 PM:
---

Thanks for the review [~jojochuang]. 
Sure, I didn't previously add the {{setupBadVersionResponse(version);}} call for the 
first case since it doesn't have any header information sent in the response, but as 
you mentioned in our offline sync, one reason for a difference in header length can 
also be the client and server having different versions.

Added the call to {{setupBadVersionResponse(version);}} in patch v002. Please 
review. Thanks.


was (Author: shwetayakkali):
Thanks for the review [~jojochuang]. 
Sure, I didn't previous add the {{setupBadVersionResponse(version);} for the 
first case since it doesn't have any Header information sent but as you 
mentioned in our offline sync that one reason for a difference in header length 
can also be due to Client and Server having different versions.

Added the call to {{setupBadVersionResponse(version);} in patch v002. Please 
review. Thanks.

> Incorrect header or version mismatch log message
> 
>
> Key: HDFS-14451
> URL: https://issues.apache.org/jira/browse/HDFS-14451
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: noob
> Attachments: HDFS-14451.001.patch
>
>
> {code:java|title=Server.java}
>   if (!RpcConstants.HEADER.equals(dataLengthBuffer)
>   || version != CURRENT_VERSION) {
> //Warning is ok since this is not supposed to happen.
> LOG.warn("Incorrect header or version mismatch from " + 
>  hostAddress + ":" + remotePort +
>  " got version " + version + 
>  " expected version " + CURRENT_VERSION);
> setupBadVersionResponse(version);
> return -1;
> {code}
> This message should include the value of {{RpcConstants.HEADER}} and 
> {{dataLengthBuffer}} in addition to just the version information or else that 
> data is lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14453) Improve Bad Sequence Number Error Message

2019-05-02 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14453:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks for the patch and review!

> Improve Bad Sequence Number Error Message
> -
>
> Key: HDFS-14453
> URL: https://issues.apache.org/jira/browse/HDFS-14453
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Shweta
>Priority: Minor
>  Labels: noob
> Fix For: 3.3.0
>
> Attachments: HDFS-14453.001.patch
>
>
> {code:java|title=DataStreamer.java}
>   if (one.getSeqno() != seqno) {
> throw new IOException("ResponseProcessor: Expecting seqno" +
> " for block " + block +
> one.getSeqno() + " but received " + seqno);
>   }
> {code}
> https://github.com/apache/hadoop/blob/685cb83e4c3f433c5147e35217ce79ea520a0da5/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1154-L1158
> There is no space between the {{block}} and the {{one.getSeqno()}}.  Please 
> change to:
> {code:java}
>   if (one.getSeqno() != seqno) {
> throw new IOException("ResponseProcessor: Expecting seqno " + 
> one.getSeqno()
> + " for block " + block + " but received " + seqno);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1451) SCMBlockManager findPipeline and createPipeline are not lock protected

2019-05-02 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831983#comment-16831983
 ] 

Aravindan Vijayan commented on HDDS-1451:
-

[~msingh] The getPipelines() and createPipeline() already seem to have a lock 
in their implementation. However, the problem described involves a race 
condition between the calls to getPipelines() and createPipeline() in 
BlockManagerImpl#allocateBlock. Is my understanding correct? Do you see any 
other approach to this? 
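A minimal sketch of the kind of fix being discussed (the class, interface, and method 
names below are hypothetical; the real BlockManagerImpl and PipelineManager APIs 
differ): make the lookup and the creation one atomic step so that two concurrent 
allocateBlock() callers cannot both miss the lookup and both create a pipeline.

{code:java}
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

class PipelineChooser<P> {
  interface Source<T> {
    List<T> getPipelines();               // stand-in for pipelineManager.getPipelines(...)
    T createPipeline() throws Exception;  // stand-in for createPipeline(type, factor)
  }

  private final ReentrantLock lock = new ReentrantLock();

  P findOrCreate(Source<P> source) throws Exception {
    lock.lock();
    try {
      List<P> existing = source.getPipelines();
      if (!existing.isEmpty()) {
        return existing.get(0);           // reuse an existing pipeline
      }
      return source.createPipeline();     // only one caller can reach this at a time
    } finally {
      lock.unlock();
    }
  }
}
{code}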

> SCMBlockManager findPipeline and createPipeline are not lock protected
> --
>
> Key: HDDS-1451
> URL: https://issues.apache.org/jira/browse/HDDS-1451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> SCM BlockManager may try to allocate pipelines in the cases when it is not 
> needed. This happens because BlockManagerImpl#allocateBlock is not lock 
> protected, so multiple pipelines can be allocated from it. One of the 
> pipeline allocation can fail even when one of the existing pipeline already 
> exists.
> {code}
> 2019-04-22 22:34:14,336 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 6f4bb2d7-d660-4f9f-bc06-72b10f9a738e, Nodes: 76e1a493-fd55-4d67-9f5
> 5-c04fd6bd3a33{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}2b9850b2-aed3-4a40-91b5-2447dc5246bf{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}12248721-ea6a-453f-8dad-fc7fbe692f
> d2{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,386 INFO  impl.RoleInfo 
> (RoleInfo.java:shutdownLeaderElection(134)) - 
> e17b7852-4691-40c7-8791-ad0b0da5201f: shutdown LeaderElection
> 2019-04-22 22:34:14,388 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 552e28f3-98d9-41f3-86e0-c1b9494838a5, Nodes: e17b7852-4691-40c7-879
> 1-ad0b0da5201f{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}fd365bac-e26e-4b11-afd8-9d08cd1b0521{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}9583a007-7f02-4074-9e26-19bc18e29e
> c5{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,388 INFO  impl.RoleInfo (RoleInfo.java:updateAndGet(143)) 
> - e17b7852-4691-40c7-8791-ad0b0da5201f: start FollowerState
> 2019-04-22 22:34:14,388 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 5383151b-d625-4362-a7dd-c0d353acaf76, Nodes: 80f16ad6-3879-4a64-a3c
> 7-7719813cc139{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}082ce481-7fb0-4f88-ac21-82609290a6a2{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}dd5f5a70-0217-4577-b7a2-c42aa139d1
> 8a{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,389 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> be4854e5-7933-4caa-b32e-f482cf500247, Nodes: 6e2356f1-479d-498b-876
> a-1c90623c498b{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: 
> null}8ac46d94-9975-4eea-9448-2618c69d7bf3{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}a3ed36a1-44ca-47b2-b9b3-5aeef04595
> 18{ip: 192.168.0.104, host: 192.168.0.104, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,390 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> 21e368e2-f82a-4c61-9cc3-06e8de22ea6b, Nodes: 
> 82632040-5754-4122-b187-331879586842{ip: 192.168.0.104, host: 192.168.0.104, 
> certSerialId: null}923c8537-b869-4085-adcb-0a9accdcd089{ip: 192.168.0.104, 
> host: 192.168.0.104, certSerialId: 
> null}c6d790bf-e3a6-4064-acb5-f74796cd38a9{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}, Type:RATIS, Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,390 INFO  pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$create$1(103)) -  pipeline Pipeline[ Id: 
> cccbc2ed-e0e2-4578-a8a2-94f4b645be52, Nodes: 
> 91ae6848-a778-43be-a4a1-5855f7adc0d8{ip: 192.168.0.104, host: 192.168.0.104, 
> certSerialId: null}8f330a03-40e2-4bd1-9b43-5e05b13d89f0{ip: 192.168.0.104, 
> host: 192.168.0.104, certSerialId: 
> null}4f3070dc-650b-48d7-87b5-d2076104e7b4{ip: 192.168.0.104, host: 
> 192.168.0.104, certSerialId: null}, Type:RATIS, Factor:THREE, State:OPEN]
> 2019-04-22 22:34:14,392 ERROR block.BlockManagerImpl 
> 

[jira] [Commented] (HDDS-1473) DataNode ID file should be human readable

2019-05-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831982#comment-16831982
 ] 

Hudson commented on HDDS-1473:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16497 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16497/])
HDDS-1473. DataNode ID file should be human readable. (#781) (koneru.hanisha: 
rev 1df679985be187ef773daae37816ddf1df2e411a)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/DatanodeIdYaml.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java


> DataNode ID file should be human readable
> -
>
> Key: HDDS-1473
> URL: https://issues.apache.org/jira/browse/HDDS-1473
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The DataNode ID file should be human readable to make debugging easier. We 
> should use YAML as we have used it elsewhere for meta files.
> Currently it is a binary file whose contents are protobuf encoded. This is a 
> tiny file read once on startup, so performance is not a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1473) DataNode ID file should be human readable

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1473?focusedWorklogId=236557=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236557
 ]

ASF GitHub Bot logged work on HDDS-1473:


Author: ASF GitHub Bot
Created on: 02/May/19 20:59
Start Date: 02/May/19 20:59
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #781: 
HDDS-1473. DataNode ID file should be human readable.
URL: https://github.com/apache/hadoop/pull/781
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236557)
Time Spent: 3h 10m  (was: 3h)

> DataNode ID file should be human readable
> -
>
> Key: HDDS-1473
> URL: https://issues.apache.org/jira/browse/HDDS-1473
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The DataNode ID file should be human readable to make debugging easier. We 
> should use YAML as we have used it elsewhere for meta files.
> Currently it is a binary file whose contents are protobuf encoded. This is a 
> tiny file read once on startup, so performance is not a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1473) DataNode ID file should be human readable

2019-05-02 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-1473:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> DataNode ID file should be human readable
> -
>
> Key: HDDS-1473
> URL: https://issues.apache.org/jira/browse/HDDS-1473
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The DataNode ID file should be human readable to make debugging easier. We 
> should use YAML as we have used it elsewhere for meta files.
> Currently it is a binary file whose contents are protobuf encoded. This is a 
> tiny file read once on startup, so performance is not a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1448) RatisPipelineProvider should only consider open pipeline while excluding dn for pipeline allocation

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1448?focusedWorklogId=236556=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236556
 ]

ASF GitHub Bot logged work on HDDS-1448:


Author: ASF GitHub Bot
Created on: 02/May/19 20:59
Start Date: 02/May/19 20:59
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #786: HDDS-1448 : 
RatisPipelineProvider should only consider open pipeline …
URL: https://github.com/apache/hadoop/pull/786#issuecomment-488828795
 
 
   We also have to add the pipelines in ALLOCATED state to the set of DNs being 
excluded. The reason is that the SCMPipelineManager initializes the set of 
pipelines from the pipeline store (RocksDB) in the ALLOCATED state. Hence, when 
the background pipeline creator gets the current pipeline list for creating 
pipelines pro-actively, if we fail to exclude these "ALLOCATED" pipelines which 
may soon transition to "OPEN", we can end up creating duplicate pipelines in 
the same DN ring. 
   
   If the ALLOCATED state becomes OPEN, then it remains a NO-Op and if it 
transitions to CLOSED, the background pipeline creator will automatically start 
using the DNs in the next creation. 
   
   cc @mukul1987 / @nandakumar131 / @arp7 Can you comment if the above makes 
sense? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236556)
Time Spent: 1h  (was: 50m)

> RatisPipelineProvider should only consider open pipeline while excluding dn 
> for pipeline allocation
> ---
>
> Key: HDDS-1448
> URL: https://issues.apache.org/jira/browse/HDDS-1448
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While allocating pipelines, the Ratis pipeline provider considers all 
> pipelines irrespective of their state. This can lead to a case where all the 
> datanodes are up but the pipelines are in the closing state in SCM.
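As a rough illustration of the behavior proposed in the comment above (hypothetical types and field names, not the actual SCM classes), the exclusion set would be built only from pipelines that are usable or about to become usable:

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical minimal model of a pipeline and its member datanodes.
enum PipelineState { ALLOCATED, OPEN, CLOSED }

class PipelineInfo {
  PipelineState state;
  List<String> datanodeUuids;
}

class ExcludedDatanodes {
  // Only OPEN pipelines (and ALLOCATED ones, which may soon become OPEN)
  // should pin their datanodes; CLOSED pipelines must not block allocation.
  static Set<String> compute(List<PipelineInfo> pipelines) {
    Set<String> excluded = new HashSet<>();
    for (PipelineInfo p : pipelines) {
      if (p.state == PipelineState.OPEN
          || p.state == PipelineState.ALLOCATED) {
        excluded.addAll(p.datanodeUuids);
      }
    }
    return excluded;
  }
}
{code}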



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1448) RatisPipelineProvider should only consider open pipeline while excluding dn for pipeline allocation

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1448?focusedWorklogId=236553=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236553
 ]

ASF GitHub Bot logged work on HDDS-1448:


Author: ASF GitHub Bot
Created on: 02/May/19 20:55
Start Date: 02/May/19 20:55
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #786: HDDS-1448 : 
RatisPipelineProvider should only consider open pipeline …
URL: https://github.com/apache/hadoop/pull/786#issuecomment-488828795
 
 
   We also have to add the pipelines in ALLOCATED state to the set of DNs being 
excluded. The reason is that the SCMPipelineManager initializes the set of 
pipelines from the pipeline store (RocksDB) in the ALLOCATED state. Hence, when 
the background pipeline creator gets the current pipeline list for creating 
pipelines pro-actively, if we fail to exclude these "ALLOCATED" pipelines which 
may soon transition to "OPEN", we can end up creating duplicate pipelines in 
the same DN ring. 
   
   If the ALLOCATED state becomes OPEN, then it remains a NO-Op and if it 
transitions to CLOSED, the background pipeline will automatically start using 
the DNs in the next creation. 
   
   cc @mukul1987 / @nandakumar131 / @arp7 Can you comment if the above makes 
sense? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236553)
Time Spent: 40m  (was: 0.5h)

> RatisPipelineProvider should only consider open pipeline while excluding dn 
> for pipeline allocation
> ---
>
> Key: HDDS-1448
> URL: https://issues.apache.org/jira/browse/HDDS-1448
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> While allocating pipelines, the Ratis pipeline provider considers all 
> pipelines irrespective of their state. This can lead to a case where all the 
> datanodes are up but the pipelines are in the closing state in SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1448) RatisPipelineProvider should only consider open pipeline while excluding dn for pipeline allocation

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1448?focusedWorklogId=236555=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236555
 ]

ASF GitHub Bot logged work on HDDS-1448:


Author: ASF GitHub Bot
Created on: 02/May/19 20:57
Start Date: 02/May/19 20:57
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #786: HDDS-1448 : 
RatisPipelineProvider should only consider open pipeline …
URL: https://github.com/apache/hadoop/pull/786#issuecomment-488828795
 
 
   We also have to add the pipelines in ALLOCATED state to the set of DNs being 
excluded. The reason is that the SCMPipelineManager initializes the set of 
pipelines from the pipeline store (RocksDB) in the ALLOCATED state. Hence, when 
the background pipeline creator gets the current pipeline list for creating 
pipelines pro-actively, if we fail to exclude these "ALLOCATED" pipelines which 
may soon transition to "OPEN", we can end up creating duplicate pipelines in 
the same DN ring. 
   
   If the ALLOCATED state becomes OPEN, then it remains a NO-Op and if it 
transitions to CLOSED, the background pipeline will automatically start using 
the DNs in the next creation. 
   
   cc @mukul1987 / @nandakumar131 / @arp7 Can you comment if the above makes 
sense? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236555)
Time Spent: 50m  (was: 40m)

> RatisPipelineProvider should only consider open pipeline while excluding dn 
> for pipeline allocation
> ---
>
> Key: HDDS-1448
> URL: https://issues.apache.org/jira/browse/HDDS-1448
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> While allocating pipelines, the Ratis pipeline provider considers all 
> pipelines irrespective of their state. This can lead to a case where all the 
> datanodes are up but the pipelines are in the closing state in SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1473) DataNode ID file should be human readable

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1473?focusedWorklogId=236540=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236540
 ]

ASF GitHub Bot logged work on HDDS-1473:


Author: ASF GitHub Bot
Created on: 02/May/19 20:35
Start Date: 02/May/19 20:35
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #781: HDDS-1473. 
DataNode ID file should be human readable.
URL: https://github.com/apache/hadoop/pull/781#issuecomment-488822994
 
 
   The test failures are unrelated and pass locally. I will merge this PR. 
Thank you @swagle for working on this. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236540)
Time Spent: 3h  (was: 2h 50m)

> DataNode ID file should be human readable
> -
>
> Key: HDDS-1473
> URL: https://issues.apache.org/jira/browse/HDDS-1473
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The DataNode ID file should be human readable to make debugging easier. We 
> should use YAML as we have used it elsewhere for meta files.
> Currently it is a binary file whose contents are protobuf encoded. This is a 
> tiny file read once on startup, so performance is not a concern.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=236538=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236538
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 02/May/19 20:25
Start Date: 02/May/19 20:25
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-488819215
 
 
   The failing acceptance tests and unit tests are not related to this change. 
The trunk is broken and these tests need to be fixed in trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236538)
Time Spent: 1h 10m  (was: 1h)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a directory path as its value and assume the 
> standard filename "datanode.id".
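A minimal sketch of the proposed resolution; the helper and constant names are illustrative, not the actual patch:

{code:java}
import java.io.File;

class DatanodeIdPathSketch {
  static final String DATANODE_ID_FILE_NAME = "datanode.id";

  // Operators point the config at a directory; the file name is fixed.
  static File resolveIdFile(String configuredDir) {
    return new File(configuredDir, DATANODE_ID_FILE_NAME);
  }
}
{code}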



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1487) Bootstrap React framework for Recon UI

2019-05-02 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1487 started by Vivek Ratnavel Subramanian.

> Bootstrap React framework for Recon UI
> --
>
> Key: HDDS-1487
> URL: https://issues.apache.org/jira/browse/HDDS-1487
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Bootstrap React with Typescript, Ant, LESS and other necessary libraries for 
> Recon UI. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1487) Bootstrap React framework for Recon UI

2019-05-02 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1487:


 Summary: Bootstrap React framework for Recon UI
 Key: HDDS-1487
 URL: https://issues.apache.org/jira/browse/HDDS-1487
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Affects Versions: 0.4.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Bootstrap React with Typescript, Ant, LESS and other necessary libraries for 
Recon UI. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1485) Ozone writes fail when single threaded client writes 100MB files repeatedly.

2019-05-02 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1485:
---
Priority: Blocker  (was: Major)

> Ozone writes fail when single threaded client writes 100MB files repeatedly. 
> -
>
> Key: HDDS-1485
> URL: https://issues.apache.org/jira/browse/HDDS-1485
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Priority: Blocker
>
> *Environment*
> 26 node physical cluster.
> All Datanodes are up and running.
> Client attempting to write 1600 x 100MB files using the FsStress utility 
> (https://github.com/arp7/FsPerfTest) fails with the following error. 
> {code}
> 19/05/02 09:58:49 ERROR storage.BlockOutputStream: Unexpected Storage 
> Container Exception:
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 424 does not exist
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:539)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:616)
> at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> It looks like a corruption in the container metadata. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1483) Fix getMultipartKey javadoc

2019-05-02 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831901#comment-16831901
 ] 

Dinesh Chitlangia commented on HDDS-1483:
-

Thanks [~bharatviswa] for filing jira, reviewing and committing the fix.

> Fix getMultipartKey javadoc
> ---
>
> Key: HDDS-1483
> URL: https://issues.apache.org/jira/browse/HDDS-1483
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> /**
> <<< HEAD
>  * Returns the DB key name of a multipart upload key in OM metadata store.
>  *
>  * @param volume - volume name
>  * @param bucket - bucket name
>  * @param key - key name
>  * @param uploadId - the upload id for this key
>  * @return bytes of DB key.
>  */
>  String getMultipartKey(String volume, String bucket, String key, String
>  uploadId);{code}
>  
> Remove *<<< HEAD* unwanted change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1475) Fix OzoneContainer start method

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1475?focusedWorklogId=236507=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236507
 ]

ASF GitHub Bot logged work on HDDS-1475:


Author: ASF GitHub Bot
Created on: 02/May/19 19:26
Start Date: 02/May/19 19:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #788: HDDS-1475 : Fix 
OzoneContainer start method.
URL: https://github.com/apache/hadoop/pull/788#issuecomment-488801033
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 424 | trunk passed |
   | +1 | compile | 202 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 818 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 226 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 422 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 384 | the patch passed |
   | +1 | compile | 203 | the patch passed |
   | +1 | javac | 203 | the patch passed |
   | +1 | checkstyle | 59 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 660 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 127 | the patch passed |
   | +1 | findbugs | 437 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 145 | hadoop-hdds in the patch failed. |
   | -1 | unit | 682 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 5046 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.ozone.om.TestOzoneManagerRestInterface |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.scm.TestAllocateContainer |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineUtils |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/788 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3862905a2007 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 865c328 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-788/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 

[jira] [Commented] (HDFS-14463) Add Log Level link under NameNode and DataNode Web UI Utilities dropdown

2019-05-02 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831866#comment-16831866
 ] 

Siyao Meng commented on HDFS-14463:
---

Thanks [~jojochuang] for committing this!

> Add Log Level link under NameNode and DataNode Web UI Utilities dropdown
> 
>
> Key: HDFS-14463
> URL: https://issues.apache.org/jira/browse/HDFS-14463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Trivial
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14463.001.patch, dn_postpatch.png, nn_postpatch.png
>
>
> Add Log Level link under NameNode and DataNode Web UI Utilities dropdown:
>  !nn_postpatch.png! 
>  !dn_postpatch.png! 
> CC [~arpitagarwal] [~jojochuang]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=236484=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236484
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 02/May/19 18:46
Start Date: 02/May/19 18:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-488787096
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 399 | trunk passed |
   | +1 | compile | 206 | trunk passed |
   | +1 | checkstyle | 56 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 820 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 137 | trunk passed |
   | 0 | spotbugs | 292 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 522 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 407 | the patch passed |
   | +1 | compile | 221 | the patch passed |
   | +1 | javac | 221 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 132 | the patch passed |
   | +1 | findbugs | 473 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 156 | hadoop-hdds in the patch failed. |
   | -1 | unit | 933 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 5600 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.web.client.TestVolume |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/792 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs yamllint |
   | uname | Linux c2f1007334ed 

[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831848#comment-16831848
 ] 

Hadoop QA commented on HDFS-14440:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
49s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m  
1s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14440 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967673/HDFS-14440-HDFS-13891-05.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5f05b4940f25 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 40963f9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26741/testReport/ |
| Max. process+thread count | 980 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26741/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically 

[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831855#comment-16831855
 ] 

Íñigo Goiri commented on HDFS-14440:


No need to do toString() in the LOG, otherwise you lose the benefits of using 
{}.
LOG takes care of doing the toString() if needed.
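A tiny example of the point, assuming an SLF4J logger as used elsewhere in the router code:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingExample.class);

  void report(Object location) {
    // Placeholder form: location.toString() runs only if DEBUG is enabled.
    LOG.debug("Found existing location {}", location);
    // Eager form: the toString() cost is paid even when DEBUG is off.
    LOG.debug("Found existing location {}", location.toString());
  }
}
{code}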

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch, HDFS-14440-HDFS-13891-05.patch
>
>
> In case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing 
> getBlockLocation() API, which is by default a sequential call.
> In the ideal scenario, where the file still needs to be created, each 
> subcluster is checked sequentially; this can be done concurrently to save time.
> In the other case, where the file is found and its last block is null, we 
> need to call getFileInfo on all the locations to find where the file exists. 
> This can also be avoided by using a concurrent call, since we already have 
> the remoteLocation for which getBlockLocation returned a non-null entry.
>  
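A hedged sketch of the concurrent check using plain java.util.concurrent; the SubclusterCall interface is a placeholder standing in for the router's per-subcluster getBlockLocations/getFileInfo invocation, not the actual RBF API:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ConcurrentExistenceCheck {

  // Placeholder for the per-subcluster RPC (e.g. getBlockLocations).
  interface SubclusterCall {
    boolean exists(String subcluster) throws Exception;
  }

  // Probe all subclusters in parallel; return those where the file exists.
  static List<String> findExisting(List<String> subclusters, SubclusterCall call)
      throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(subclusters.size());
    try {
      List<Future<Boolean>> futures = new ArrayList<>();
      for (String sc : subclusters) {
        Callable<Boolean> probe = () -> call.exists(sc);
        futures.add(pool.submit(probe));
      }
      List<String> found = new ArrayList<>();
      for (int i = 0; i < subclusters.size(); i++) {
        if (futures.get(i).get()) {   // wait for each probe to finish
          found.add(subclusters.get(i));
        }
      }
      return found;
    } finally {
      pool.shutdown();
    }
  }
}
{code}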



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1464) Client should have different retry policies for different exceptions

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1464?focusedWorklogId=236460=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236460
 ]

ASF GitHub Bot logged work on HDDS-1464:


Author: ASF GitHub Bot
Created on: 02/May/19 18:07
Start Date: 02/May/19 18:07
Worklog Time Spent: 10m 
  Work Description: swagle commented on issue #785: HDDS-1464. Client 
should have different retry policies for different exceptions.
URL: https://github.com/apache/hadoop/pull/785#issuecomment-488773774
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236460)
Time Spent: 1h 10m  (was: 1h)

> Client should have different retry policies for different exceptions
> 
>
> Key: HDDS-1464
> URL: https://issues.apache.org/jira/browse/HDDS-1464
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The client should have different retry policies for different types of failures.
> For example, if a key write fails because of a ContainerNotOpen exception, the 
> client should wait for a specified interval before retrying. But if the key 
> write fails because of, say, a Ratis leader election or a request timeout, we 
> want the client to retry immediately.
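For illustration, Hadoop's generic retry utilities already support mapping exception classes to policies. A hedged sketch follows: the ContainerNotOpenException here is a local placeholder class, and whether the Ozone client wires into org.apache.hadoop.io.retry this way is an assumption, not the actual patch:

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

class OzoneRetryPolicySketch {

  // Hypothetical marker for the "container not open" failure mode.
  static class ContainerNotOpenException extends IOException { }

  static RetryPolicy build() {
    Map<Class<? extends Exception>, RetryPolicy> byException = new HashMap<>();
    // Back off before retrying when the container is not yet open.
    byException.put(ContainerNotOpenException.class,
        RetryPolicies.retryUpToMaximumCountWithFixedSleep(10, 1, TimeUnit.SECONDS));
    // Everything else (e.g. leader election, timeouts) retries immediately.
    return RetryPolicies.retryByException(
        RetryPolicies.retryUpToMaximumCountWithFixedSleep(10, 0, TimeUnit.SECONDS),
        byException);
  }
}
{code}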



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13522) Support observer node from Router-Based Federation

2019-05-02 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-13522:
---
Attachment: Router+Observer RPC clogging.png

> Support observer node from Router-Based Federation
> --
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13522.001.patch, Router+Observer RPC clogging.png
>
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13522) Support observer node from Router-Based Federation

2019-05-02 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831823#comment-16831823
 ] 

CR Hota commented on HDFS-13522:


[~elgoiri] [~ayushtkn] [~csun] Thanks a lot for the discussion and 
[~surendrasingh] many thanks for your initial patch. Great to see more interest 
in this work.

Based on my understanding, below are the problem statement and the design exit 
criteria we may want to look at. Attached is a file that represents the problem 
statement pictorially.
h1. Problem statement 

Figure 1 in the attached Router+Observer RPC clogging.png shows a typical RPC 
mechanism with respect to the active namenode and observer namenode. In this 
case, observer namenodes strictly process read-only requests from clients. 
Since there is no global lock or contention, RPC queue wait times are lower and 
processing times are significantly faster on Observer namenodes than on the 
active namenode. With router-based federation, a proxy layer is introduced that 
serves incoming client traffic on behalf of the client and performs the actual 
action against the downstream namenode. _This server proxy layer inherits the 
same server implementation that namenodes use_. With a single RPC queue in the 
router, all reads and writes get intermingled again, thus substantially 
diminishing the benefits of the Observer namenode. Figure 2 illustrates this 
behavior. This is particularly problematic for RPC-latency-sensitive real-time 
engines such as Presto.
h1. Design criteria 

At a high level, the design of this feature should help achieve the two key 
objectives below.
 # Separate read vs write queuing in routers to begin with, so that read calls 
keep a fast-lane access path. This work should lay the foundation to eventually 
separate read vs write traffic per nameservice, thus helping solve HDFS-14090.
 # Honor existing client configurations around whether or not to access the 
Observer NN for certain use cases. For example, a client can currently continue 
using ConfiguredFailOverProxy without connecting to the Observer. If a client 
maintains such a preference, routers should honor it and connect to the Active 
Namenode for read calls as well (see the sketch after this list).
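As a concrete example of such a client preference, a client can opt in to observer reads by choosing the observer-aware proxy provider, or keep the plain failover provider to stay on the active namenode; "mycluster" is a placeholder nameservice id and this is only an illustrative sketch:

{code:java}
import org.apache.hadoop.conf.Configuration;

class ObserverReadClientConfigSketch {

  // Client that is willing to read from Observer namenodes.
  static Configuration observerReadingClient() {
    Configuration conf = new Configuration();
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider");
    return conf;
  }

  // Client that prefers to keep all calls on the Active namenode.
  static Configuration activeOnlyClient() {
    Configuration conf = new Configuration();
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    return conf;
  }
}
{code}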

 

 

> Support observer node from Router-Based Federation
> --
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13522.001.patch, Router+Observer RPC clogging.png
>
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1475) Fix OzoneContainer start method

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1475?focusedWorklogId=236447=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236447
 ]

ASF GitHub Bot logged work on HDDS-1475:


Author: ASF GitHub Bot
Created on: 02/May/19 17:51
Start Date: 02/May/19 17:51
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #788: HDDS-1475 : 
Fix OzoneContainer start method.
URL: https://github.com/apache/hadoop/pull/788#discussion_r280529818
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
 ##
 @@ -172,7 +176,9 @@ private void stopContainerScrub() {
 if (scrubber == null) {
   return;
 }
-scrubber.down();
+if (scrubber.isHalted()) {
 
 Review comment:
   Yes, good catch!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236447)
Time Spent: 1.5h  (was: 1h 20m)

> Fix OzoneContainer start method
> ---
>
> Key: HDDS-1475
> URL: https://issues.apache.org/jira/browse/HDDS-1475
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In OzoneContainer start() we have 
> {code:java}
> startContainerScrub();
> writeChannel.start();
> readChannel.start();
> hddsDispatcher.init();
> hddsDispatcher.setScmId(scmId);{code}
>  
> Suppose readChannel.start() fails for some reason; from VersionEndPointTask, 
> we then try to start OzoneContainer again. This can cause an issue for 
> writeChannel.start() if it has already been started. 
>  
> Fix the logic in such a way that if a service is already started, we don't 
> attempt to start it again. Similar changes need to be made for stop().
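One common way to make start()/stop() idempotent is a compare-and-set guard. The sketch below only shows the idea (the commented-out calls mirror the snippet quoted above); it is not the actual OzoneContainer patch:

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

class IdempotentService {
  private final AtomicBoolean started = new AtomicBoolean(false);

  void start() {
    // Only the first caller flips the flag and starts the sub-services;
    // a retry (e.g. from VersionEndPointTask) becomes a no-op.
    if (!started.compareAndSet(false, true)) {
      return;
    }
    // startContainerScrub(); writeChannel.start(); readChannel.start(); ...
  }

  void stop() {
    // Symmetric guard so stop() is safe on a service that never started.
    if (!started.compareAndSet(true, false)) {
      return;
    }
    // readChannel.stop(); writeChannel.stop(); ...
  }
}
{code}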



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1464) Client should have different retry policies for different exceptions

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1464?focusedWorklogId=236440=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236440
 ]

ASF GitHub Bot logged work on HDDS-1464:


Author: ASF GitHub Bot
Created on: 02/May/19 17:45
Start Date: 02/May/19 17:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #785: HDDS-1464. Client 
should have different retry policies for different exceptions.
URL: https://github.com/apache/hadoop/pull/785#issuecomment-488766177
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 434 | trunk passed |
   | +1 | compile | 214 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 868 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 126 | trunk passed |
   | 0 | spotbugs | 298 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 501 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 444 | the patch passed |
   | +1 | compile | 203 | the patch passed |
   | +1 | javac | 203 | the patch passed |
   | +1 | checkstyle | 56 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 727 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 124 | the patch passed |
   | +1 | findbugs | 448 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 163 | hadoop-hdds in the patch failed. |
   | -1 | unit | 967 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 5530 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.web.TestOzoneVolumes |
   |   | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.om.TestOzoneManagerRestInterface |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.ozShell.TestS3Shell |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.ozone.scm.TestSCMMXBean |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-785/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/785 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
 

[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=236435=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236435
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 02/May/19 17:39
Start Date: 02/May/19 17:39
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-488764144
 
 
   /retest
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236435)
Time Spent: 50m  (was: 40m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831795#comment-16831795
 ] 

Ayush Saxena commented on HDFS-14460:
-

[~elgoiri] The rebase won't work as of now. You need to coordinate with the 
INFRA team to allow it.

> DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy 
> configured
> -
>
> Key: HDFS-14460
> URL: https://issues.apache.org/jira/browse/HDFS-14460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14460.001.patch, HDFS-14460.002.patch, 
> HDFS-14460.003.patch, HDFS-14460.004.patch
>
>
> DFSUtil#getNamenodeWebAddr does a look-up of the HTTP address irrespective of 
> the policy configured. It should instead look at the configured policy and 
> return the appropriate web address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1486) Ozone write fails in allocateBlock while writing >10MB files in multiple threads.

2019-05-02 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1486:

Description: 
15 node physical cluster. All Datanodes are up and running.
Client using 16 threads attempting to write 16000 x 10MB+ files using the 
FsStress utility 
(https://github.com/arp7/FsPerfTest) fails with the following error.
This is an intermittent issue.

*Server side exceptions*
{code}
19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
write failed, already allocated 0 blocks for this write.

19/04/18 14:33:23 WARN io.KeyOutputStream: Encountered exception 
java.io.IOException: Unexpected Storage Container Exception: 
java.util.concurrent.CompletionException: 
java.util.concurrent.CompletionException: 
org.apache.ratis.protocol.AlreadyClosedException: SlidingWindow$Client 
client-ADE7F801D3AD->RAFT is closed.. The last committed block length is 0, 
uncommitted data length is 10485760 retry count 0
{code}

*Client side exceptions*
{code}
FAILED org.apache.ratis.protocol.NotLeaderException: Server 
c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f is not the leader 
(f44c1413-0847-45e3-982d-ac3aec15dffc:10.17.200.23:9858). Request must be sent 
to leader., logIndex=0, commits[c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f:c131161, 
287eccfb-8461-419a-8732-529d042380b3:c131161, 
f44c1413-0847-45e3-982d-ac3aec15dffc:c131161]
{code} 

In the case of small key sizes (<1MB) and big key sizes with single thread, the 
above client side exceptions are infrequent. However, in the case of 
multithreaded 10MB+ size keys, the exceptions occur about 50% of the time and 
eventually cause write failures. I have attached one such failed pipeline logs.
 [^Datanode Logs.zip] 

  was:
15 node physical cluster. All Datanodes are up and running.
Client attempting to write 1600 x 100MB files using the FsStress utility 
(https://github.com/arp7/FsPerfTest) fails with the following error.
This is an intermittent issue.

*Server side exceptions*
{code}
19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
write failed, already allocated 0 blocks for this write.

19/04/18 14:33:23 WARN io.KeyOutputStream: Encountered exception 
java.io.IOException: Unexpected Storage Container Exception: 
java.util.concurrent.CompletionException: 
java.util.concurrent.CompletionException: 
org.apache.ratis.protocol.AlreadyClosedException: SlidingWindow$Client 
client-ADE7F801D3AD->RAFT is closed.. The last committed block length is 0, 
uncommitted data length is 10485760 retry count 0
{code}

*Client side exceptions*
{code}
FAILED org.apache.ratis.protocol.NotLeaderException: Server 
c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f is not the leader 
(f44c1413-0847-45e3-982d-ac3aec15dffc:10.17.200.23:9858). Request must be sent 
to leader., logIndex=0, commits[c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f:c131161, 
287eccfb-8461-419a-8732-529d042380b3:c131161, 
f44c1413-0847-45e3-982d-ac3aec15dffc:c131161]
{code} 

In the case of small key sizes (<1MB) and big key sizes with single thread, the 
above client side exceptions are infrequent. However, in the case of 
multithreaded 10MB+ size keys, the exceptions occur about 50% of the time and 
eventually cause write failures. I have attached one such failed pipeline logs.
 [^Datanode Logs.zip] 


> Ozone write fails in allocateBlock while writing >10MB files in multiple 
> threads.
> -
>
> Key: HDDS-1486
> URL: https://issues.apache.org/jira/browse/HDDS-1486
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Priority: Major
>  Labels: intermittent
> Attachments: Datanode Logs.zip
>
>
> 15 node physical cluster. All Datanodes are up and running.
> Client using 16 threads attempting to write 16000 x 10MB+ files using the 
> FsStress utility 
> (https://github.com/arp7/FsPerfTest) fails with the following error.
> This is an intermittent issue.
> *Server side exceptions*
> {code}
> 19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
> write failed, already allocated 0 blocks for this write.
> 19/04/18 14:33:23 WARN io.KeyOutputStream: Encountered exception 
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.AlreadyClosedException: SlidingWindow$Client 
> client-ADE7F801D3AD->RAFT is closed.. The last committed block length is 0, 
> uncommitted data length is 10485760 retry count 0
> {code}
> *Client side exceptions*
> {code}
> FAILED org.apache.ratis.protocol.NotLeaderException: Server 
> c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f is not the leader 
> (f44c1413-0847-45e3-982d-ac3aec15dffc:10.17.200.23:9858). Request must be 
> 

[jira] [Commented] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831785#comment-16831785
 ] 

Hudson commented on HDFS-14460:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16494 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16494/])
HDFS-14460. DFSUtil#getNamenodeWebAddr should return HTTPS address based 
(inigoiri: rev 865c3289308327788f3bed355864c510deb40956)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java


> DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy 
> configured
> -
>
> Key: HDFS-14460
> URL: https://issues.apache.org/jira/browse/HDFS-14460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14460.001.patch, HDFS-14460.002.patch, 
> HDFS-14460.003.patch, HDFS-14460.004.patch
>
>
> DFSUtil#getNamenodeWebAddr does a look-up of HTTP address irrespective of 
> policy configured. It should instead look at the policy configured and return 
> appropriate web address.
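
To make the intended behaviour concrete, a minimal policy-aware lookup could look 
like the sketch below. It only uses the public Configuration API; the class and 
method names are illustrative and the fallback defaults assume the usual Hadoop 3.x 
web ports, so this is a sketch of the idea rather than the committed patch.

{code}
import org.apache.hadoop.conf.Configuration;

/**
 * Illustrative sketch only: choose the web address key based on the
 * configured dfs.http.policy instead of always returning the HTTP one.
 */
public final class WebAddrSketch {
  public static String getNamenodeWebAddr(Configuration conf) {
    // HTTP_ONLY (default), HTTPS_ONLY or HTTP_AND_HTTPS.
    String policy = conf.get("dfs.http.policy", "HTTP_ONLY");
    if ("HTTPS_ONLY".equals(policy)) {
      return conf.get("dfs.namenode.https-address", "0.0.0.0:9871");
    }
    return conf.get("dfs.namenode.http-address", "0.0.0.0:9870");
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.http.policy", "HTTPS_ONLY");
    conf.set("dfs.namenode.https-address", "nn1.example.com:9871");
    System.out.println(getNamenodeWebAddr(conf)); // nn1.example.com:9871
  }
}
{code}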



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1486) Ozone write fails in allocateBlock while writing >10MB files in multiple threads.

2019-05-02 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1486:

Summary: Ozone write fails in allocateBlock while writing >10MB files in 
multiple threads.  (was: Ozone write fails in allocateBlock while writing >1MB 
files in multiple threads.)

> Ozone write fails in allocateBlock while writing >10MB files in multiple 
> threads.
> -
>
> Key: HDDS-1486
> URL: https://issues.apache.org/jira/browse/HDDS-1486
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Priority: Major
>  Labels: intermittent
> Attachments: Datanode Logs.zip
>
>
> 15 node physical cluster. All Datanodes are up and running.
> Client attempting to write 1600 x 100MB files using the FsStress utility 
> (https://github.com/arp7/FsPerfTest) fails with the following error.
> This is an intermittent issue.
> *Server side exceptions*
> {code}
> 19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
> write failed, already allocated 0 blocks for this write.
> 19/04/18 14:33:23 WARN io.KeyOutputStream: Encountered exception 
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.AlreadyClosedException: SlidingWindow$Client 
> client-ADE7F801D3AD->RAFT is closed.. The last committed block length is 0, 
> uncommitted data length is 10485760 retry count 0
> {code}
> *Client side exceptions*
> {code}
> FAILED org.apache.ratis.protocol.NotLeaderException: Server 
> c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f is not the leader 
> (f44c1413-0847-45e3-982d-ac3aec15dffc:10.17.200.23:9858). Request must be 
> sent to leader., logIndex=0, 
> commits[c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f:c131161, 
> 287eccfb-8461-419a-8732-529d042380b3:c131161, 
> f44c1413-0847-45e3-982d-ac3aec15dffc:c131161]
> {code} 
> In the case of small key sizes (<1MB) and big key sizes with single thread, 
> the above client side exceptions are infrequent. However, in the case of 
> multithreaded 10MB+ size keys, the exceptions occur about 50% of the time and 
> eventually cause write failures. I have attached one such failed pipeline 
> logs.
>  [^Datanode Logs.zip] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1486) Ozone write fails in allocateBlock while writing >1MB files in multiple threads.

2019-05-02 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1486:

Description: 
15 node physical cluster. All Datanodes are up and running.
Client attempting to write 1600 x 100MB files using the FsStress utility 
(https://github.com/arp7/FsPerfTest) fails with the following error.
This is an intermittent issue.

*Server side exceptions*
{code}
19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
write failed, already allocated 0 blocks for this write.

19/04/18 14:33:23 WARN io.KeyOutputStream: Encountered exception 
java.io.IOException: Unexpected Storage Container Exception: 
java.util.concurrent.CompletionException: 
java.util.concurrent.CompletionException: 
org.apache.ratis.protocol.AlreadyClosedException: SlidingWindow$Client 
client-ADE7F801D3AD->RAFT is closed.. The last committed block length is 0, 
uncommitted data length is 10485760 retry count 0
{code}

*Client side exceptions*
{code}
FAILED org.apache.ratis.protocol.NotLeaderException: Server 
c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f is not the leader 
(f44c1413-0847-45e3-982d-ac3aec15dffc:10.17.200.23:9858). Request must be sent 
to leader., logIndex=0, commits[c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f:c131161, 
287eccfb-8461-419a-8732-529d042380b3:c131161, 
f44c1413-0847-45e3-982d-ac3aec15dffc:c131161]
{code} 

In the case of small key sizes (<1MB) and big key sizes with single thread, the 
above client side exceptions are infrequent. However, in the case of 
multithreaded 10MB+ size keys, the exceptions occur about 50% of the time and 
eventually cause write failures. I have attached one such failed pipeline logs.
 [^Datanode Logs.zip] 

> Ozone write fails in allocateBlock while writing >1MB files in multiple 
> threads.
> 
>
> Key: HDDS-1486
> URL: https://issues.apache.org/jira/browse/HDDS-1486
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Priority: Major
>  Labels: intermittent
> Attachments: Datanode Logs.zip
>
>
> 15 node physical cluster. All Datanodes are up and running.
> Client attempting to write 1600 x 100MB files using the FsStress utility 
> (https://github.com/arp7/FsPerfTest) fails with the following error.
> This is an intermittent issue.
> *Server side exceptions*
> {code}
> 19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
> write failed, already allocated 0 blocks for this write.
> 19/04/18 14:33:23 WARN io.KeyOutputStream: Encountered exception 
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.AlreadyClosedException: SlidingWindow$Client 
> client-ADE7F801D3AD->RAFT is closed.. The last committed block length is 0, 
> uncommitted data length is 10485760 retry count 0
> {code}
> *Client side exceptions*
> {code}
> FAILED org.apache.ratis.protocol.NotLeaderException: Server 
> c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f is not the leader 
> (f44c1413-0847-45e3-982d-ac3aec15dffc:10.17.200.23:9858). Request must be 
> sent to leader., logIndex=0, 
> commits[c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f:c131161, 
> 287eccfb-8461-419a-8732-529d042380b3:c131161, 
> f44c1413-0847-45e3-982d-ac3aec15dffc:c131161]
> {code} 
> In the case of small key sizes (<1MB) and big key sizes with single thread, 
> the above client side exceptions are infrequent. However, in the case of 
> multithreaded 10MB+ size keys, the exceptions occur about 50% of the time and 
> eventually cause write failures. I have attached one such failed pipeline 
> logs.
>  [^Datanode Logs.zip] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1486) Ozone write fails in allocateBlock while writing >1MB files in multiple threads.

2019-05-02 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1486:

Attachment: Datanode Logs.zip

> Ozone write fails in allocateBlock while writing >1MB files in multiple 
> threads.
> 
>
> Key: HDDS-1486
> URL: https://issues.apache.org/jira/browse/HDDS-1486
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Priority: Major
>  Labels: intermittent
> Attachments: Datanode Logs.zip
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8631) WebHDFS : Support get/setQuota

2019-05-02 Thread Xue Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831784#comment-16831784
 ] 

Xue Liu commented on HDFS-8631:
---

Hi [~surendrasingh]

Are you still actively working on this issue? If not, I would be glad to take 
this JIRA, as one of our production tools would benefit a lot from it!

> WebHDFS : Support get/setQuota
> --
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.2
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, 
> HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, 
> HDFS-8631-006.patch
>
>
> Users are able to do quota management from the filesystem object. The same 
> operation can be allowed through the REST API.
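
For reference, the quota management that is already available programmatically, and 
that this issue proposes to mirror over REST, can be exercised roughly as in the 
sketch below. The path and quota values are examples; the eventual WebHDFS 
operation and parameter names are defined by the attached patches and are not 
shown here.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

/** Sketch of the existing FileSystem-level quota operations that this
 *  issue proposes to expose over WebHDFS. Paths and values are examples. */
public class QuotaSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/user/example");

    if (fs instanceof DistributedFileSystem) {
      // Namespace quota of 10000 objects and a 1 TB space quota.
      ((DistributedFileSystem) fs).setQuota(dir, 10000L,
          1024L * 1024L * 1024L * 1024L);
    }

    // Quotas are reported back through the content summary.
    ContentSummary summary = fs.getContentSummary(dir);
    System.out.println("namespace quota: " + summary.getQuota()
        + ", space quota: " + summary.getSpaceQuota());
  }
}
{code}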



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831781#comment-16831781
 ] 

Ayush Saxena commented on HDFS-14440:
-

I have uploaded patch v5 with the said changes.
Please review!

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch, HDFS-14440-HDFS-13891-05.patch
>
>
> In the case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing 
> getBlockLocation() API, which is by default a sequential call.
> In the common scenario where the file still needs to be created, each subcluster 
> is currently checked sequentially; this can be done concurrently to save time.
> In the other case, where the file is found but its last block is null, we 
> currently need to call getFileInfo on all the locations to find where the file 
> exists. This can also be avoided by using a concurrent call, since we already 
> have the remoteLocation for which getBlockLocation returned a non-null entry.
>  
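
As a rough illustration of the optimization described above, the per-subcluster 
existence checks can be fanned out with futures instead of being issued one after 
another. The sketch below uses plain CompletableFuture with stand-in types; it is 
not the Router's actual RemoteLocation/invokeConcurrent plumbing.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.function.Predicate;
import java.util.stream.Collectors;

/**
 * Illustrative sketch only: probe all destination subclusters concurrently
 * and keep the first one that already has the file, instead of checking
 * them one after another.
 */
public class ConcurrentProbeSketch {
  static Optional<String> findExisting(List<String> subclusters,
                                       Predicate<String> fileExistsIn) {
    List<CompletableFuture<Optional<String>>> probes = subclusters.stream()
        .map(ns -> CompletableFuture.supplyAsync(
            () -> fileExistsIn.test(ns) ? Optional.of(ns) : Optional.<String>empty()))
        .collect(Collectors.toList());
    // Wait for every probe, then keep the first subcluster that has the file.
    return probes.stream()
        .map(CompletableFuture::join)
        .filter(Optional::isPresent)
        .map(Optional::get)
        .findFirst();
  }

  public static void main(String[] args) {
    List<String> subclusters = Arrays.asList("ns0", "ns1", "ns2");
    System.out.println(findExisting(subclusters, "ns1"::equals)); // Optional[ns1]
  }
}
{code}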



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14440:

Attachment: HDFS-14440-HDFS-13891-05.patch

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch, HDFS-14440-HDFS-13891-05.patch
>
>
> In the case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing 
> getBlockLocation() API, which is by default a sequential call.
> In the common scenario where the file still needs to be created, each subcluster 
> is currently checked sequentially; this can be done concurrently to save time.
> In the other case, where the file is found but its last block is null, we 
> currently need to call getFileInfo on all the locations to find where the file 
> exists. This can also be avoided by using a concurrent call, since we already 
> have the remoteLocation for which getBlockLocation returned a non-null entry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1486) Ozone write fails in allocateBlock while writing >1MB files in multiple threads.

2019-05-02 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1486:
---

 Summary: Ozone write fails in allocateBlock while writing >1MB 
files in multiple threads.
 Key: HDDS-1486
 URL: https://issues.apache.org/jira/browse/HDDS-1486
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Aravindan Vijayan






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1473) DataNode ID file should be human readable

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1473?focusedWorklogId=236421=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236421
 ]

ASF GitHub Bot logged work on HDDS-1473:


Author: ASF GitHub Bot
Created on: 02/May/19 17:10
Start Date: 02/May/19 17:10
Worklog Time Spent: 10m 
  Work Description: swagle commented on issue #781: HDDS-1473. DataNode ID 
file should be human readable.
URL: https://github.com/apache/hadoop/pull/781#issuecomment-488754319
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236421)
Time Spent: 2h 50m  (was: 2h 40m)

> DataNode ID file should be human readable
> -
>
> Key: HDDS-1473
> URL: https://issues.apache.org/jira/browse/HDDS-1473
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The DataNode ID file should be human readable to make debugging easier. We 
> should use YAML as we have used it elsewhere for meta files.
> Currently it is a binary file whose contents are protobuf encoded. This is a 
> tiny file read once on startup, so performance is not a concern.
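
To give a feel for what a human-readable ID file could look like, the sketch below 
dumps an illustrative record with SnakeYAML. The field names and ports are made up 
for the example and do not claim to match the real DatanodeDetails layout.

{code}
import java.io.FileWriter;
import java.io.Writer;
import java.util.LinkedHashMap;
import java.util.Map;

import org.yaml.snakeyaml.Yaml;

/** Sketch only: write a datanode ID record as human-readable YAML.
 *  Field names here are illustrative, not the real on-disk format. */
public class DatanodeIdYamlSketch {
  public static void main(String[] args) throws Exception {
    Map<String, Object> ports = new LinkedHashMap<>();
    ports.put("RATIS", 9858);
    ports.put("STANDALONE", 9859);

    Map<String, Object> id = new LinkedHashMap<>();
    id.put("uuid", "c6e64cc4-91e9-4b36-83e4-6d84a4e71b7f");
    id.put("ipAddress", "10.17.200.23");
    id.put("hostName", "dn1.example.com");
    id.put("ports", ports);

    try (Writer out = new FileWriter("datanode.id")) {
      new Yaml().dump(id, out); // plain key/value YAML, easy to inspect by hand
    }
  }
}
{code}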



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831774#comment-16831774
 ] 

Íñigo Goiri commented on HDFS-14460:


Thanks [~crh] for the patch.
Committed to trunk.

> DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy 
> configured
> -
>
> Key: HDFS-14460
> URL: https://issues.apache.org/jira/browse/HDFS-14460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14460.001.patch, HDFS-14460.002.patch, 
> HDFS-14460.003.patch, HDFS-14460.004.patch
>
>
> DFSUtil#getNamenodeWebAddr does a look-up of HTTP address irrespective of 
> policy configured. It should instead look at the policy configured and return 
> appropriate web address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14460:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy 
> configured
> -
>
> Key: HDFS-14460
> URL: https://issues.apache.org/jira/browse/HDFS-14460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14460.001.patch, HDFS-14460.002.patch, 
> HDFS-14460.003.patch, HDFS-14460.004.patch
>
>
> DFSUtil#getNamenodeWebAddr does a look-up of HTTP address irrespective of 
> policy configured. It should instead look at the policy configured and return 
> appropriate web address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831770#comment-16831770
 ] 

Íñigo Goiri commented on HDFS-14460:


Let me commit to trunk.
Regarding HDFS-13891... I don't know how that will go; let's see but I'm losing 
faith.

> DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy 
> configured
> -
>
> Key: HDFS-14460
> URL: https://issues.apache.org/jira/browse/HDFS-14460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14460.001.patch, HDFS-14460.002.patch, 
> HDFS-14460.003.patch, HDFS-14460.004.patch
>
>
> DFSUtil#getNamenodeWebAddr does a look-up of HTTP address irrespective of 
> policy configured. It should instead look at the policy configured and return 
> appropriate web address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1485) Ozone writes fail when single threaded client writes 100MB files repeatedly.

2019-05-02 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1485:
---

 Summary: Ozone writes fail when single threaded client writes 
100MB files repeatedly. 
 Key: HDDS-1485
 URL: https://issues.apache.org/jira/browse/HDDS-1485
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Aravindan Vijayan


*Environment*
26 node physical cluster.
All Datanodes are up and running.
Client attempting to write 1600 x 100MB files using the FsStress utility 
(https://github.com/arp7/FsPerfTest) fails with the following error. 

{code}
19/05/02 09:58:49 ERROR storage.BlockOutputStream: Unexpected Storage Container 
Exception:
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
ContainerID 424 does not exist
at 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:539)
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:616)
at 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

It looks like a corruption in the container metadata. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14460) DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured

2019-05-02 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831766#comment-16831766
 ] 

CR Hota commented on HDFS-14460:


[~elgoiri] Thanks for the confirmation.

Since this is a well contained and fairly small change, can we commit this and 
rebase HDFS-13891 branch? Will do the HDFS-13955 change following this.

> DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy 
> configured
> -
>
> Key: HDFS-14460
> URL: https://issues.apache.org/jira/browse/HDFS-14460
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14460.001.patch, HDFS-14460.002.patch, 
> HDFS-14460.003.patch, HDFS-14460.004.patch
>
>
> DFSUtil#getNamenodeWebAddr does a look-up of HTTP address irrespective of 
> policy configured. It should instead look at the policy configured and return 
> appropriate web address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=236415=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236415
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 02/May/19 16:58
Start Date: 02/May/19 16:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #793: HDDS-1224. 
Restructure code to validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793#issuecomment-488750605
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 409 | trunk passed |
   | +1 | compile | 211 | trunk passed |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 876 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 137 | trunk passed |
   | 0 | spotbugs | 282 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 477 | trunk passed |
   | -0 | patch | 308 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 422 | the patch passed |
   | +1 | compile | 203 | the patch passed |
   | +1 | javac | 203 | the patch passed |
   | -0 | checkstyle | 29 | hadoop-hdds: The patch generated 41 new + 0 
unchanged - 0 fixed = 41 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 717 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 140 | the patch passed |
   | -1 | findbugs | 249 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   | -1 | findbugs | 208 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 159 | hadoop-hdds in the patch failed. |
   | -1 | unit | 50 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 4612 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.chunkIndex; locked 96% of 
time  Unsynchronized access at BlockInputStream.java:96% of time  
Unsynchronized access at BlockInputStream.java:[line 303] |
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-793/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/793 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 9943a2f5c261 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a42745 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-793/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-793/1/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-793/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-793/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-793/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-793/1/testReport/ |
   | Max. process+thread count | 370 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-ozone/client 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-793/1/console |
   | versions | git=2.7.4 maven=3.3.9 

[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831763#comment-16831763
 ] 

Íñigo Goiri commented on HDFS-14440:


In the javadoc, "else" instead of "else".
As we are doing this, it might be good to add a log debug message when we are 
doing "createLocation = existingLocation"; it wasn't there before but it would 
help debugging.

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch
>
>
> In the case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing 
> getBlockLocation() API, which is by default a sequential call.
> In the common scenario where the file still needs to be created, each subcluster 
> is currently checked sequentially; this can be done concurrently to save time.
> In the other case, where the file is found but its last block is null, we 
> currently need to call getFileInfo on all the locations to find where the file 
> exists. This can also be avoided by using a concurrent call, since we already 
> have the remoteLocation for which getBlockLocation returned a non-null entry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14426) RBF: Add delegation token total count as one of the federation metrics

2019-05-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831760#comment-16831760
 ] 

Íñigo Goiri commented on HDFS-14426:


It looks like we are up to date:
https://github.com/apache/hadoop/commits/HDFS-13891
It looks like it includes HDFS-14374.


> RBF: Add delegation token total count as one of the federation metrics
> --
>
> Key: HDFS-14426
> URL: https://issues.apache.org/jira/browse/HDFS-14426
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14426-HDFS-13891.001.patch, HDFS-14426.001.patch
>
>
> Currently router doesn't report the total number of current valid delegation 
> tokens it has, but this piece of information is useful for monitoring and 
> understanding the real time situation of tokens.
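
One possible shape for such a metric is sketched below with the hadoop-common 
metrics2 annotations and a stand-in supplier for the router's token store. The 
class, registration name, and getter are hypothetical and only show how a token 
count could be published, not the actual patch.

{code}
import java.util.function.LongSupplier;

import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

/**
 * Sketch only: publish the router's current delegation token count through
 * the Hadoop metrics2 system. The LongSupplier stands in for whatever the
 * router's secret manager actually exposes.
 */
@Metrics(about = "Router delegation token metrics", context = "dfs")
public class RouterTokenMetricsSketch {
  private final LongSupplier tokenCount;

  private RouterTokenMetricsSketch(LongSupplier tokenCount) {
    this.tokenCount = tokenCount;
  }

  public static RouterTokenMetricsSketch create(LongSupplier tokenCount) {
    // Registering the source makes the annotated getter visible to metrics sinks.
    return DefaultMetricsSystem.instance().register(
        "RouterTokenMetricsSketch", "Delegation token count for the router",
        new RouterTokenMetricsSketch(tokenCount));
  }

  @Metric("Current number of valid delegation tokens held by the router")
  public long getCurrentTokensCount() {
    return tokenCount.getAsLong();
  }
}
{code}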



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1395) Key write fails with BlockOutputStream has been closed exception

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1395?focusedWorklogId=236412=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236412
 ]

ASF GitHub Bot logged work on HDDS-1395:


Author: ASF GitHub Bot
Created on: 02/May/19 16:52
Start Date: 02/May/19 16:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #749: HDDS-1395. Key 
write fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-488748715
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 423 | trunk passed |
   | -1 | compile | 36 | hadoop-hdds in trunk failed. |
   | -1 | compile | 25 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 784 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 244 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 433 | trunk passed |
   | -0 | patch | 275 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 406 | the patch passed |
   | +1 | compile | 205 | the patch passed |
   | -1 | javac | 86 | hadoop-hdds generated 11 new + 0 unchanged - 0 fixed = 
11 total (was 0) |
   | +1 | checkstyle | 54 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 645 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 120 | the patch passed |
   | +1 | findbugs | 427 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 140 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1176 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 5412 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.ozShell.TestS3Shell |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.web.TestOzoneVolumes |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/749 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux c024cdf21f02 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a42745 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/8/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/8/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/8/artifact/out/diff-compile-javac-hadoop-hdds.txt
 |
   | unit | 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831742#comment-16831742
 ] 

Eric Badger commented on HDDS-1458:
---

bq. Jonathan Eagles Eric Badger, please speak up if you have concerns to rename 
docker profile to dist profile.

As I commented in YARN-7129, I am against adding mandatory Docker image builds 
to the default Hadoop build process. The community came to this same consensus 
via [this mailing list thread| 
https://lists.apache.org/thread.html/c63f404bc44f8f249cbc98ee3f6633384900d07e2308008fe4620150@%3Ccommon-dev.hadoop.apache.org%3E].
 

However, I am not an HDDS developer and do not have proper insight into HDDS 
development. So I can only give my thoughts on this from a YARN perspective. 
Maybe this is a great idea for HDDS, maybe it's not. Since I don't know 
anything about HDDS, I can't really give you an opinion. But I think that it 
definitely warrants getting more eyes and reviews on this from the HDDS 
community.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have ability to start docker compose and exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  This is 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeout.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=236398=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236398
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 02/May/19 16:30
Start Date: 02/May/19 16:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r280499682
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-all.sh
 ##
 @@ -0,0 +1,47 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# Test executor to test all the compose/*/test.sh test scripts.
+#
+
+SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )
+ALL_RESULT_DIR="$SCRIPT_DIR/result"
+
+mkdir -p "$ALL_RESULT_DIR"
+rm "$ALL_RESULT_DIR/*"
+
+RESULT=0
+IFS=$'\n'
+# shellcheck disable=SC2044
+for test in $(find $SCRIPT_DIR -name test.sh); do
 
 Review comment:
   shellcheck:20: note: Double quote to prevent globbing and word splitting. 
[SC2086]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236398)
Time Spent: 1h 40m  (was: 1.5h)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh which starts/stops the docker-compose based 
> environments AND execute the selected robot tests inside the right hosts
> The second one (test.sh) has some serious limitations:
> 1. all the tests are executed inside the same container (om):
> https://github.com/apache/hadoop/blob/5f951ea2e39ae4dfe554942baeec05849cd7d3c2/hadoop-ozone/dist/src/main/smoketest/test.sh#L89
> Some of the tests (ozonesecure-mr, ozonefs) may require the flexibility to 
> execute different robot tests in different containers.
> 2. The definition of the global test set is complex and hard to understand. 
> The current code is:
> {code}
>TESTS=("basic")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("auditparser")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("ozonefs")
>execute_tests ozonefs "${TESTS[@]}"
>TESTS=("basic")
>execute_tests ozone-hdfs "${TESTS[@]}"
>TESTS=("s3")
>execute_tests ozones3 "${TESTS[@]}"
>TESTS=("security")
>execute_tests ozonesecure .
> {code} 
> For example for ozonesecure the TESTS is not used. And the usage of bash 
> lists require additional complexity in the execute_tests function.
> I propose here a very lightweight refactor. Instead of including both the 
> test definitions AND the helper methods in test.sh I would separate them.
> Let's put a test.sh to each of the compose directories. The separated test.sh 
> can include common methods from a main shell script. For example:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm basic/basic.robot
> execute_robot_test scm s3
> stop_docker_env
> generate_report
> {code}
> This is a more clean and more flexible definition. It's easy to execute just 
> this test (as it's saved to the compose/ozones3 directory. And it's more 
> flexible.
> Other example, where 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831740#comment-16831740
 ] 

Elek, Marton commented on HDDS-1458:


Thank you very much for giving me more details.

I have a slightly different view of this problem. If I understood correctly, 
there are no technical limitations as of now; it's more of a semantic question 
about what good docker usage is, or what is well designed. To be honest it's 
hard to judge for me. I think there is more than one way to do it well.

For example, I can see two different use cases:

  1. Starting a docker based pseudo cluster from a (released or dev) 
distribution. In this case the mount is not a problem. I think here we should 
use mounting to ensure we have exactly the same bits inside and outside. I 
can't see any problem here.

 2. The second use case is to provide independent, portable docker-compose 
files. We have this one even now:

{code}
docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
docker run apache/ozone cat docker-config > docker-config 
docker-compose up -d
{code}

3. I have a slightly different experience with docker image creation. I know how 
the layering works, but the current structure of the project (especially the 
creation of two 100Mb shaded jar files) is not handled very well by the layer 
cache: we need new layers anyway even if we changed only one class file in the 
project.

4. I didn't check the source of the maven docker plugin, but based on the output 
it does some additional copying. I need to check.

5. The release tar file also contains the compose directory. I think it's a very 
important part. By mounting the distribution package from the docker-compose 
files we can provide the easiest UX to start a pseudo cluster without any 
additional image creation.

6. FYI: docker image creation as part of the maven build is added with the latest 
k8s patches. I agree with you that it can help in some cases. But I don't think 
that we need to replace the volume mounts in the compose files.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have ability to start docker compose and exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  This is 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeout.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=236400=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236400
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 02/May/19 16:30
Start Date: 02/May/19 16:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-488741108
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 80 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 2 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 613 | trunk passed |
   | +1 | compile | 263 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 977 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 565 | the patch passed |
   | +1 | compile | 276 | the patch passed |
   | +1 | javac | 276 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | shellcheck | 4 | The patch generated 2 new + 0 unchanged - 1 fixed = 
2 total (was 1) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 760 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 238 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1206 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 5635 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.ozone.web.TestOzoneVolumes |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.scm.TestAllocateContainer |
   |   | hadoop.ozone.web.client.TestVolume |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/726 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml |
   | uname | Linux 6f224256abb3 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a42745 |
   | Default Java | 1.8.0_191 |
   | shellcheck | 

[jira] [Work logged] (HDDS-1424) Support multi-container robot test execution

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=236399=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236399
 ]

ASF GitHub Bot logged work on HDDS-1424:


Author: ASF GitHub Bot
Created on: 02/May/19 16:30
Start Date: 02/May/19 16:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #726: HDDS-1424. 
Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r280499696
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-single.sh
 ##
 @@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Single test executor, can start a single robot test in any running container.
+#
+
+
+COMPOSE_DIR="$PWD"
+export COMPOSE_DIR
+
+if [[ ! -f "$COMPOSE_DIR/docker-compose.yaml" ]]; then
+echo "docker-compose.yaml is missing from the current dir. Please run this 
command from a docker-compose environment."
+exit 1
+fi
+if (( $# != 2 )); then
+cat << EOF
+   Single test executor
+
+   Usage:
+
+ ../test-single.sh <container> <robot_test>
+
+container: Name of the running docker-compose container 
(docker-compose.yaml is required in the current directory)
+
+robot_test: name of the robot test or directory relative to the 
smoketest dir.
+
+
+
+EOF
+
+fi
+
+# shellcheck source=testlib.sh
+source "$COMPOSE_DIR/../testlib.sh"
 
 Review comment:
   shellcheck:1: note: Not following: testlib.sh: openBinaryFile: does not 
exist (No such file or directory) [SC1091]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236399)
Time Spent: 1h 50m  (was: 1h 40m)

> Support multi-container robot test execution
> 
>
> Key: HDDS-1424
> URL: https://issues.apache.org/jira/browse/HDDS-1424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework 
> based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a 
> given host machine)
> 2. ./smoketest/test.sh which starts/stops the docker-compose based 
> environments AND execute the selected robot tests inside the right hosts
> The second one (test.sh) has some serious limitations:
> 1. all the tests are executed inside the same container (om):
> https://github.com/apache/hadoop/blob/5f951ea2e39ae4dfe554942baeec05849cd7d3c2/hadoop-ozone/dist/src/main/smoketest/test.sh#L89
> Some of the tests (ozonesecure-mr, ozonefs) may require the flexibility to 
> execute different robot tests in different containers.
> 2. The definition of the global test set is complex and hard to understand. 
> The current code is:
> {code}
>TESTS=("basic")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("auditparser")
>execute_tests ozone "${TESTS[@]}"
>TESTS=("ozonefs")
>execute_tests ozonefs "${TESTS[@]}"
>TESTS=("basic")
>execute_tests ozone-hdfs "${TESTS[@]}"
>TESTS=("s3")
>execute_tests ozones3 "${TESTS[@]}"
>TESTS=("security")
>execute_tests ozonesecure .
> {code} 
> For example for ozonesecure the TESTS is not used. And the usage of bash 
> lists require additional complexity in the execute_tests function.
> I propose here a very lightweight refactor. Instead of including both the 
> test definitions AND the helper methods in test.sh, I would separate them.
> Let's put a test.sh into each of the compose directories. The separated test.sh 
> can 
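A minimal sketch of what such a per-directory test.sh could look like; the 
helper names (start_docker_env, execute_robot_test, stop_docker_env, 
generate_report) and the container names are illustrative assumptions, not a 
final API:

{code}
#!/usr/bin/env bash
# Lives next to one docker-compose.yaml: it only declares which robot tests run
# in which containers and delegates the mechanics to the shared library.
COMPOSE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
export COMPOSE_DIR

# shellcheck source=/dev/null
source "$COMPOSE_DIR/../testlib.sh"

start_docker_env
execute_robot_test scm basic/basic.robot   # <container> <robot test>
execute_robot_test s3g s3
stop_docker_env
generate_report
{code}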

[jira] [Comment Edited] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831724#comment-16831724
 ] 

Eric Yang edited comment on HDDS-1458 at 5/2/19 4:22 PM:
-

[~elek] I will rebase the patch to current trunk today.

{quote}Can you please describe what is the problem exactly? I have some 
concerns to create a docker image with each build. It's time and space 
consuming. {quote}

The current Ozone docker image is not easily transportable.  A well designed 
docker image should have no host-level binary dependency.  To share the Ozone 
docker image with another host, the ozone tarball location on the first host 
must be copied to the second host to reproduce the same in-sync state as the 
docker image, and docker-compose must then lock the host-level binaries and the 
docker image together to produce a functional system.  This is not the intended 
way to use a Docker image.

If there is no change to the files used to build a docker image layer, docker 
simply reuses the cached layer (a reference count) instead of regenerating the 
entire layer.  Each line in the Dockerfile produces an immutable image layer, 
and docker is good at caching the output without rebuilding everything from 
scratch.  A well designed docker image build may take minutes the first time, 
but subsequent builds take only sub-seconds.  Unless the layers have changed, 
they do not take up more space than a reference count.  The Ozone tar stitching 
is the same as building a Docker layer, only at the host level, and for any 
kind of system test we are already doing tar/untar operations.  The cost of 
building an Ozone image is about the same as expanding the tarball, so the cost 
can easily be justified.  The idea that the docker build process is expensive 
is a misconception; it really depends on how the build is structured.  If the 
high-frequency changes are placed toward the end of the image creation, the 
time spent in docker build can be really small.  We can always skip the docker 
image build with -DskipDocker.  This is similar to -DskipShade for people who 
don't work in those areas.
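
A rough illustration of that caching behaviour; the image name and Dockerfile 
layout are assumptions, and -DskipDocker is the flag proposed in this thread 
rather than an existing option:

{code}
# First build creates every layer and may take minutes:
docker build -t ozone:dev .
# ...change only the Ozone tarball that is ADDed in the last Dockerfile step...
# Rebuild: earlier layers come from the cache, only the final layer is rebuilt:
docker build -t ozone:dev .
# Or skip the image build entirely when it is not needed (proposed flag):
mvn clean install -DskipDocker
{code}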

The benefit of separating the docker project from the dist project is that the 
builder can choose to build only the tarball or only the docker image.  Each 
subproject has a single purpose and can be rebuilt standalone without building 
the whole project.  I think it is a great improvement for developers to work on 
small units rather than doing a full build each time.
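
A sketch of that per-module workflow; hadoop-ozone/dist is an existing module 
path, while hadoop-ozone/docker stands in for the separate docker module 
proposed here:

{code}
# Build only the distribution tarball (plus the modules it depends on):
mvn -am -pl hadoop-ozone/dist clean package -DskipTests
# Build only the (proposed) docker image module:
mvn -am -pl hadoop-ozone/docker clean package -DskipTests
{code}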

{quote}I believe it's more effective to use just the hadoop-runner image and 
mount the built artifact. I would like to understand the problem in more 
details to find an effective solution. Wouldn't be enough to use the compose 
files form the target folder?{quote}

Docker images are designed to host binary executables as layers of immutable 
file system changes.  This provides a predictable outcome when binaries are 
swapped out between container instances.  When the binary executables live 
outside the docker container, the stability of the container instance depends 
on the externally mounted binaries.  Mounting external executable binaries 
makes the setup less reproducible, because it depends heavily on the external 
mount point state being in sync with the container image state.  A standalone 
docker image is a more effective way to share containers than having 
docker-compose stitch host-level binaries together with an empty container.

{quote}ps: I didn't check the patch yet, as it's conflicting but in case of 
having multiple fundamental changes, can be better to commit it in multiple 
smaller parts (IMHO){quote}

It makes sense to restructure the patch into smaller parts.  I just happened to 
discover one problem after another while working on completely separate goals, 
and I can't really do the original work until the prerequisites are fulfilled.  
I was too deep into the refactoring, so I made one big patch for my own 
self-tracking.  I will break it up into smaller issues and patches.  Thanks for 
the quick review.


was (Author: eyang):
[~elek] I will rebase the patch to current trunk today.

{quote}Can you please describe what is the problem exactly? I have some 
concerns to create a docker image with each build. It's time and space 
consuming. {quote}

The current ozone docker image is not easily transportable.  A well design 
docker image should have no host level binary dependency.  In order to share 
the ozone docker image to another host, the ozone tarball location on the host 
must be copied to a second host in order to reproducible the in-sync state as 
the docker image from the first host, and use docker-compose to lock host level 
binaries and docker image together to produce a functional system.  This is not 
intended approach to use Docker image.

If there is no change to files that is used to build docker image layer, it 
will simply use a reference count instead of 

[jira] [Commented] (HDFS-14426) RBF: Add delegation token total count as one of the federation metrics

2019-05-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831732#comment-16831732
 ] 

Ayush Saxena commented on HDFS-14426:
-

Looks like it has been rebased now.
Could anyone take a look?

> RBF: Add delegation token total count as one of the federation metrics
> --
>
> Key: HDFS-14426
> URL: https://issues.apache.org/jira/browse/HDFS-14426
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14426-HDFS-13891.001.patch, HDFS-14426.001.patch
>
>
> Currently the router doesn't report the total number of currently valid 
> delegation tokens it holds, but this piece of information is useful for 
> monitoring and understanding the real-time state of tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831724#comment-16831724
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] I will rebase the patch to current trunk today.

{quote}Can you please describe what is the problem exactly? I have some 
concerns to create a docker image with each build. It's time and space 
consuming. {quote}

The current Ozone docker image is not easily transportable.  A well designed 
docker image should have no host-level binary dependency.  To share the Ozone 
docker image with another host, the ozone tarball location on the first host 
must be copied to the second host to reproduce the same in-sync state as the 
docker image, and docker-compose must then lock the host-level binaries and the 
docker image together to produce a functional system.  This is not the intended 
way to use a Docker image.

If there is no change to the files used to build a docker image layer, docker 
simply reuses the cached layer (a reference count) instead of regenerating the 
entire layer.  Each line in the Dockerfile produces an immutable image layer, 
and docker is good at caching the output without rebuilding everything from 
scratch.  A well designed docker image build may take minutes the first time, 
but subsequent builds take only sub-seconds.  Unless the layers have changed, 
they do not take up more space than a reference count.  The Ozone tar stitching 
is the same as building a Docker layer, only at the host level, and for any 
kind of system test we are already doing tar/untar operations.  The cost of 
building an Ozone image is about the same as expanding the tarball, so the cost 
can easily be justified.  The idea that the docker build process is expensive 
is a misconception; it really depends on how the build is structured.  If the 
high-frequency changes are placed toward the end of the image creation, the 
time spent in docker build can be really small.  We can always skip the docker 
image build with -DskipDocker.  This is similar to -DskipShade for people who 
don't work in those areas.

{quote}I believe it's more effective to use just the hadoop-runner image and 
mount the built artifact. I would like to understand the problem in more 
details to find an effective solution. Wouldn't be enough to use the compose 
files form the target folder?{quote}

Docker images are designed to host binary executables as layers of immutable 
file system changes.  This provides a predictable outcome when binaries are 
swapped out between container instances.  When the binary executables live 
outside the docker container, the stability of the container instance depends 
on the externally mounted binaries.  Mounting external executable binaries 
makes the setup less reproducible, because it depends heavily on the external 
mount point state being in sync with the container image state.  A standalone 
docker image is a more effective way to share containers than having 
docker-compose stitch host-level binaries together with an empty container.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against the Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13189) Standby NameNode should roll active edit log when checkpointing

2019-05-02 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831706#comment-16831706
 ] 

Chao Sun commented on HDFS-13189:
-

[~starphin]. Cool - somehow didn't notice HDFS-14378. Will take a look.

> Standby NameNode should roll active edit log when checkpointing
> ---
>
> Key: HDFS-13189
> URL: https://issues.apache.org/jira/browse/HDFS-13189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chao Sun
>Priority: Minor
>
> When the SBN is doing checkpointing, it will hold the {{cpLock}}. In the 
> current implementation of the edit log tailer thread, it will first check and 
> roll the active edit log, and then tail and apply edits. In the case of 
> checkpointing, it will be blocked on the {{cpLock}} and will not roll the 
> edit log.
> It seems there is no dependency between the edit log roll and tailing edits, 
> so a better approach may be to do these in separate threads. This will be 
> helpful for people who use the observer feature without in-progress edit log 
> tailing. 
> An alternative is to configure 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} and 
> {{dfs.namenode.edit.log.autoroll.check.interval.ms}} to let the ANN roll its 
> own log more frequently in case the SBN is stuck on the lock.
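
For reference, a quick way to check those two settings on a running cluster; a 
sketch, with the stock hdfs-default.xml values noted in the comments:

{code}
hdfs getconf -confKey dfs.namenode.edit.log.autoroll.multiplier.threshold   # default 2.0
hdfs getconf -confKey dfs.namenode.edit.log.autoroll.check.interval.ms      # default 300000 (5 minutes)
{code}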



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14349) Edit log may be rolled more frequently than necessary with multiple Standby nodes

2019-05-02 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795553#comment-16795553
 ] 

star edited comment on HDFS-14349 at 5/2/19 3:41 PM:
-

It has been verified that the edit log roll will be triggered by multiple SNNs.

I've proposed an improvement issue, HDFS-14378, to put things right once and for 
all. The main idea is to have the ANN roll its own edit log and download the 
fsimage from a randomly chosen SNN, while the SNNs just do checkpointing and 
tail edit logs. You are welcome to review it or contribute.


was (Author: starphin):
Yes, it seems that normal edit log roll will be triggered by multiple SNN. I am 
doing unit tests to verify the action. 

> Edit log may be rolled more frequently than necessary with multiple Standby 
> nodes
> -
>
> Key: HDFS-14349
> URL: https://issues.apache.org/jira/browse/HDFS-14349
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Ekanth Sethuramalingam
>Priority: Major
>
> When HDFS-14317 was fixed, we tackled the problem that in a cluster with 
> in-progress edit log tailing enabled, a Standby NameNode may _never_ roll the 
> edit logs, which can eventually cause data loss.
> Unfortunately, in the process, it was made so that if there are multiple 
> Standby NameNodes, they will all roll the edit logs at their specified 
> frequency, so the edit log will be rolled X times more frequently than it 
> should be (where X is the number of Standby NNs). This is not as bad as the 
> original bug since rolling frequently does not affect correctness or data 
> availability, but may degrade performance by creating more edit log segments 
> than necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14464) Remove unnecessary log message from DFSInputStream

2019-05-02 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-14464:
-

 Summary: Remove unnecessary log message from DFSInputStream
 Key: HDFS-14464
 URL: https://issues.apache.org/jira/browse/HDFS-14464
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


This was added by HDFS-8703.  This usually doesn't come out unless a user makes 
0-byte read calls, which does happen.

{code:java}
 if (ret == 0) {
   DFSClient.LOG.warn("zero");
 }
{code}

This was removed by HDFS-8905 in trunk and 3.x, but remained in 2.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13189) Standby NameNode should roll active edit log when checkpointing

2019-05-02 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831694#comment-16831694
 ] 

star commented on HDFS-13189:
-

I've proposed an improvement issue, HDFS-14378, to put things right once and for 
all. The main idea is to have the ANN roll its own edit log and download the 
fsimage from a randomly chosen SNN, while the SNNs just do checkpointing and 
tail edit logs. You are welcome to review it or contribute.

> Standby NameNode should roll active edit log when checkpointing
> ---
>
> Key: HDFS-13189
> URL: https://issues.apache.org/jira/browse/HDFS-13189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chao Sun
>Priority: Minor
>
> When the SBN is doing checkpointing, it will hold the {{cpLock}}. In the 
> current implementation of the edit log tailer thread, it will first check and 
> roll the active edit log, and then tail and apply edits. In the case of 
> checkpointing, it will be blocked on the {{cpLock}} and will not roll the 
> edit log.
> It seems there is no dependency between the edit log roll and tailing edits, 
> so a better approach may be to do these in separate threads. This will be 
> helpful for people who use the observer feature without in-progress edit log 
> tailing. 
> An alternative is to configure 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} and 
> {{dfs.namenode.edit.log.autoroll.check.interval.ms}} to let the ANN roll its 
> own log more frequently in case the SBN is stuck on the lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1224:
-
Labels: pull-request-available  (was: )

> Restructure code to validate the response from server in the Read path
> --
>
> Key: HDDS-1224
> URL: https://issues.apache.org/jira/browse/HDDS-1224
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>
> In the read path, the validation of the response while reading data from the 
> datanodes happens in XceiverClientGrpc, and an additional checksum 
> verification happens in the Ozone client to verify the read-chunk response. 
> The aim of this Jira is to modify the function call to take a validator 
> function as part of reading data, so that all validation can happen in a 
> single unified place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=236362=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236362
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 02/May/19 15:27
Start Date: 02/May/19 15:27
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #793: HDDS-1224. 
Restructure code to validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236362)
Time Spent: 10m
Remaining Estimate: 0h

> Restructure code to validate the response from server in the Read path
> --
>
> Key: HDDS-1224
> URL: https://issues.apache.org/jira/browse/HDDS-1224
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the read path, the validation of the response while reading data from the 
> datanodes happens in XceiverClientGrpc, and an additional checksum 
> verification happens in the Ozone client to verify the read-chunk response. 
> The aim of this Jira is to modify the function call to take a validator 
> function as part of reading data, so that all validation can happen in a 
> single unified place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1484) Add unit tests for writing concurrently on different type of pipelines by multiple threads

2019-05-02 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1484:
-

 Summary: Add unit tests for writing concurrently on different type 
of pipelines by multiple threads
 Key: HDDS-1484
 URL: https://issues.apache.org/jira/browse/HDDS-1484
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee


This Jira aims to add unit tests for writing concurrently, from multiple 
threads, to single-node as well as 3-node pipelines with different data sizes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831671#comment-16831671
 ] 

Hadoop QA commented on HDFS-14440:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
56s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
20s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14440 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967642/HDFS-14440-HDFS-13891-04.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 614c7f958a90 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / aeb3b61 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26740/testReport/ |
| Max. process+thread count | 1441 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26740/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically 

[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831638#comment-16831638
 ] 

Ayush Saxena commented on HDFS-14440:
-

Thanks [~elgoiri] for the review.
I have uploaded a patch with the said change.
Please review!

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch
>
>
> In case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing 
> getBlockLocation() API, which is by default a sequential call.
> In the typical scenario, where the file still needs to be created, each 
> subcluster is currently checked sequentially; this can be done concurrently 
> to save time.
> In the other case, where the file is found but its last block is null, we 
> need to do getFileInfo to all the locations to find where the file exists. 
> This too can be avoided by using the ConcurrentCall, since we will already 
> have the remoteLocation for which getBlockLocation returned a non-null entry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14440:

Attachment: HDFS-14440-HDFS-13891-04.patch

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch
>
>
> In case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing 
> getBlockLocation() API, which is by default a sequential call.
> In the typical scenario, where the file still needs to be created, each 
> subcluster is currently checked sequentially; this can be done concurrently 
> to save time.
> In the other case, where the file is found but its last block is null, we 
> need to do getFileInfo to all the locations to find where the file exists. 
> This too can be avoided by using the ConcurrentCall, since we will already 
> have the remoteLocation for which getBlockLocation returned a non-null entry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831585#comment-16831585
 ] 

Hudson commented on HDDS-1482:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16491 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16491/])
HDDS-1482. Use strongly typed codec implementations for the S3Table (elek: rev 
4605db369e4315f6d28e6c050acd3f3c6fbec45c)
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/codec/TestS3SecretValueCodec.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/S3SecretValueCodec.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/S3SecretValue.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java


> Use strongly typed codec implementations for the S3Table
> 
>
> Key: HDDS-1482
> URL: https://issues.apache.org/jira/browse/HDDS-1482
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HDDS-864 added the strongly typed codec implementation for the tables of 
> OmMetadataManager.
>  
> The tables added as part of the S3 implementation are not using it yet. This 
> Jira addresses that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=236248=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236248
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 02/May/19 12:24
Start Date: 02/May/19 12:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#issuecomment-488652122
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 76 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1200 | trunk passed |
   | +1 | compile | 1002 | trunk passed |
   | +1 | checkstyle | 143 | trunk passed |
   | -1 | mvnsite | 43 | server-scm in trunk failed. |
   | +1 | shadedclient | 1166 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone/integration-test |
   | -1 | findbugs | 30 | server-scm in trunk failed. |
   | +1 | javadoc | 172 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 240 | the patch passed |
   | +1 | compile | 941 | the patch passed |
   | +1 | javac | 941 | the patch passed |
   | +1 | checkstyle | 143 | the patch passed |
   | +1 | mvnsite | 251 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 8 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone/integration-test |
   | +1 | findbugs | 179 | the patch passed |
   | +1 | javadoc | 70 | hadoop-hdds generated 0 new + 14 unchanged - 6 fixed = 
14 total (was 20) |
   | +1 | javadoc | 47 | common in the patch passed. |
   | +1 | javadoc | 28 | config in the patch passed. |
   | +1 | javadoc | 30 | hadoop-hdds_server-scm generated 0 new + 5 unchanged - 
6 fixed = 5 total (was 11) |
   | +1 | javadoc | 26 | integration-test in the patch passed. |
   ||| _ Other Tests _ |
   | -1 | unit | 161 | hadoop-hdds in the patch failed. |
   | +1 | unit | 84 | common in the patch passed. |
   | +1 | unit | 31 | config in the patch passed. |
   | +1 | unit | 128 | server-scm in the patch passed. |
   | -1 | unit | 828 | integration-test in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 8045 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.web.TestOzoneVolumes |
   |   | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.ozShell.TestS3Shell |
   |   | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.ozone.scm.TestSCMMXBean |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineUtils |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | 

[jira] [Work logged] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1482?focusedWorklogId=236247=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236247
 ]

ASF GitHub Bot logged work on HDDS-1482:


Author: ASF GitHub Bot
Created on: 02/May/19 12:13
Start Date: 02/May/19 12:13
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #789: HDDS-1482. Use 
strongly typed codec implementations for the S3Table.
URL: https://github.com/apache/hadoop/pull/789
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236247)
Time Spent: 0.5h  (was: 20m)

> Use strongly typed codec implementations for the S3Table
> 
>
> Key: HDDS-1482
> URL: https://issues.apache.org/jira/browse/HDDS-1482
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HDDS-864 added the strongly typed codec implementation for the tables of 
> OmMetadataManager.
>  
> The tables added as part of the S3 implementation are not using it yet. This 
> Jira addresses that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1482) Use strongly typed codec implementations for the S3Table

2019-05-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1482:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Use strongly typed codec implementations for the S3Table
> 
>
> Key: HDDS-1482
> URL: https://issues.apache.org/jira/browse/HDDS-1482
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-864 added the strongly typed codec implementation for the tables of 
> OmMetadataManager.
>  
> The tables added as part of the S3 implementation are not using it yet. This 
> Jira addresses that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=236234=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236234
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 02/May/19 11:30
Start Date: 02/May/19 11:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-488638488
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 20 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1125 | trunk passed |
   | +1 | compile | 960 | trunk passed |
   | +1 | checkstyle | 155 | trunk passed |
   | +1 | mvnsite | 216 | trunk passed |
   | +1 | shadedclient | 764 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdds/docs hadoop-ozone/dist |
   | +1 | findbugs | 147 | trunk passed |
   | +1 | javadoc | 160 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | -1 | mvninstall | 21 | dist in the patch failed. |
   | +1 | compile | 957 | the patch passed |
   | +1 | javac | 957 | the patch passed |
   | +1 | checkstyle | 148 | the patch passed |
   | +1 | mvnsite | 179 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 34 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdds/docs hadoop-ozone/dist |
   | +1 | findbugs | 162 | the patch passed |
   | +1 | javadoc | 160 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 95 | common in the patch passed. |
   | -1 | unit | 75 | container-service in the patch failed. |
   | +1 | unit | 35 | docs in the patch passed. |
   | +1 | unit | 38 | dist in the patch passed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6769 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/792 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  shellcheck  shelldocs  
yamllint  |
   | uname | Linux 19f7b5cadc95 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f1673b0 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/2/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/2/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/2/testReport/ |
   | Max. process+thread count | 341 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/docs hadoop-ozone/dist U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236234)
Time Spent: 40m  (was: 0.5h)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-02 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831550#comment-16831550
 ] 

Elek, Marton commented on HDDS-1458:


Thanks for the patch and the improvement, [~eyang]. As of now I can't check it, 
as it needs to be rebased (GitHub pull requests have no such restriction; they 
can be checked from the original PR branch even if they conflict with trunk).

bq.  There is a problem with Ozone docker image is that it mounts ozone tarball 
in expanded form from dist/target directory. This prevents integration-test to 
reiterate on the same ozone binaries.

Can you please describe what the problem is, exactly? I have some concerns 
about creating a docker image with each build; it's time and space consuming. I 
believe it's more effective to use just the hadoop-runner image and mount the 
built artifact. I would like to understand the problem in more detail to find 
an effective solution. Wouldn't it be enough to use the compose files from the 
target folder?

ps: I didn't check the patch yet, as it's conflicting, but in case of multiple 
fundamental changes it can be better to commit it in multiple smaller parts 
(IMHO).

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against the Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=236225=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236225
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 02/May/19 11:04
Start Date: 02/May/19 11:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-488632184
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1140 | trunk passed |
   | +1 | compile | 1041 | trunk passed |
   | +1 | checkstyle | 147 | trunk passed |
   | +1 | mvnsite | 223 | trunk passed |
   | +1 | shadedclient | 757 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdds/docs hadoop-ozone/dist |
   | +1 | findbugs | 152 | trunk passed |
   | +1 | javadoc | 155 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | -1 | mvninstall | 21 | dist in the patch failed. |
   | +1 | compile | 939 | the patch passed |
   | +1 | javac | 939 | the patch passed |
   | +1 | checkstyle | 147 | the patch passed |
   | +1 | mvnsite | 176 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 33 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdds/docs hadoop-ozone/dist |
   | +1 | findbugs | 163 | the patch passed |
   | +1 | javadoc | 159 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 80 | common in the patch passed. |
   | -1 | unit | 71 | container-service in the patch failed. |
   | +1 | unit | 34 | docs in the patch passed. |
   | +1 | unit | 38 | dist in the patch passed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6810 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/792 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  shellcheck  shelldocs  
yamllint  |
   | uname | Linux 2b967b1fd8c3 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f682a17 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/1/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/docs hadoop-ozone/dist U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236225)
Time Spent: 0.5h  (was: 20m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: 

[jira] [Commented] (HDDS-1469) Generate default configuration fragments based on annotations

2019-05-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831536#comment-16831536
 ] 

Hudson commented on HDDS-1469:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16490 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16490/])
HDDS-1469. Generate default configuration fragments based on annotations (elek: 
rev e2f0f7267791051b561a6e291a22bbc58c34d068)
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/ConfigGroup.java
* (add) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigTag.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/SimpleConfiguration.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/ConfigType.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (add) hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/Config.java
* (add) hadoop-ozone/integration-test/src/test/resources/hdfs-site.xml
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/Config.java
* (add) 
hadoop-hdds/config/src/test/java/org/apache/hadoop/hdds/conf/TestConfigFileAppender.java
* (add) hadoop-hdds/config/pom.xml
* (add) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigurationException.java
* (edit) hadoop-hdds/pom.xml
* (add) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java
* (add) hadoop-ozone/integration-test/src/test/resources/core-site.xml
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/ConfigurationException.java
* (add) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/package-info.java
* (add) 
hadoop-hdds/config/src/test/java/org/apache/hadoop/hdds/conf/package-info.java
* (edit) hadoop-hdds/common/pom.xml
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
* (add) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileAppender.java
* (add) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigGroup.java
* (add) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigType.java
* (add) 
hadoop-hdds/config/src/main/resources/META-INF/services/javax.annotation.processing.Processor
* (add) 
hadoop-hdds/config/src/test/java/org/apache/hadoop/hdds/conf/ConfigurationExample.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestReplicationManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java


> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration, as I added a small piece of code to the constructor that 
> checks ALL the available ozone-default-generated.xml files and adds them to 
> the available resources.
> With this approach we don't need to edit ozone-default.xml, as all the 
> configuration can be defined in Java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1469) Generate default configuration fragments based on annotations

2019-05-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1469:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by 
> OzoneConfiguration, as I added a small piece of code to the constructor that 
> checks ALL the available ozone-default-generated.xml files and adds them to 
> the available resources.
> With this approach we don't need to edit ozone-default.xml, as all the 
> configuration can be defined in Java code.
> As a side effect, each service will see only the configuration keys and 
> values available on its classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, the SCM doesn't see 
> those configs.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=236209=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236209
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 02/May/19 10:19
Start Date: 02/May/19 10:19
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236209)
Time Spent: 5h  (was: 4h 50m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by 
> OzoneConfiguration, as I added a small piece of code to the constructor that 
> checks ALL the available ozone-default-generated.xml files and adds them to 
> the available resources.
> With this approach we don't need to edit ozone-default.xml, as all the 
> configuration can be defined in Java code.
> As a side effect, each service will see only the configuration keys and 
> values available on its classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, the SCM doesn't see 
> those configs.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=236208=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236208
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 02/May/19 10:18
Start Date: 02/May/19 10:18
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #773: HDDS-1469. Generate 
default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#issuecomment-488621848
 
 
   Thanks for the review @anuengineer. I am merging it right now.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236208)
Time Spent: 4h 50m  (was: 4h 40m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by 
> OzoneConfiguration, as I added a small piece of code to the constructor that 
> checks ALL the available ozone-default-generated.xml files and adds them to 
> the available resources.
> With this approach we don't need to edit ozone-default.xml, as all the 
> configuration can be defined in Java code.
> As a side effect, each service will see only the configuration keys and 
> values available on its classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, the SCM doesn't see 
> those configs.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1479) Update S3.md documentation

2019-05-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831531#comment-16831531
 ] 

Hudson commented on HDDS-1479:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16489 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16489/])
HDDS-1479. Update S3.md documentation (elek: rev 
3cb1d09b2eb5f75e91b1a90986845f639bc68487)
* (edit) hadoop-hdds/docs/content/S3.md


> Update S3.md documentation
> --
>
> Key: HDDS-1479
> URL: https://issues.apache.org/jira/browse/HDDS-1479
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-791 implemented the range get operation.
>  
> The S3.md documentation contains the following line: 
> GET Object | implemented | Range headers are not supported
>  
> This should be updated to remove the part `Range headers are not supported`.
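
For reference, a ranged read looks roughly like the sketch below. The gateway
address, bucket and key are assumptions, and request signing/authentication is
omitted for brevity; a server that honours the Range header responds with 206
Partial Content.

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

/** Minimal sketch of a GET with a Range header against an S3-style endpoint. */
public class RangedGetExample {

  public static void main(String[] args) throws IOException {
    // Hypothetical gateway address, bucket and key - adjust for a real setup.
    URL object = new URL("http://localhost:9878/bucket1/key1");
    HttpURLConnection conn = (HttpURLConnection) object.openConnection();
    // Ask only for the first kilobyte of the object.
    conn.setRequestProperty("Range", "bytes=0-1023");
    try (InputStream in = conn.getInputStream()) {
      byte[] buffer = new byte[4096];
      int total = 0;
      int read;
      while ((read = in.read(buffer)) != -1) {
        total += read;
      }
      System.out.println("HTTP " + conn.getResponseCode()
          + ", bytes received: " + total);
    }
  }
}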



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1412) Provide example k8s deployment files as part of the release package

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1412?focusedWorklogId=236205=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236205
 ]

ASF GitHub Bot logged work on HDDS-1412:


Author: ASF GitHub Bot
Created on: 02/May/19 10:11
Start Date: 02/May/19 10:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #719: HDDS-1412. 
Provide example k8s deployment files as part of the release package
URL: https://github.com/apache/hadoop/pull/719#issuecomment-488620249
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 512 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1041 | trunk passed |
   | +1 | compile | 120 | trunk passed |
   | +1 | mvnsite | 93 | trunk passed |
   | +1 | shadedclient | 670 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | -1 | mvninstall | 18 | dist in the patch failed. |
   | +1 | compile | 106 | the patch passed |
   | +1 | javac | 106 | the patch passed |
   | +1 | hadolint | 0 | There were no new hadolint issues. |
   | +1 | mvnsite | 53 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 16 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 702 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 55 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 39 | common in the patch passed. |
   | +1 | unit | 24 | dist in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3761 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-719/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/719 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  hadolint  shellcheck  shelldocs  yamllint  |
   | uname | Linux 2107fdc79bba 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f682a17 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-719/3/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-719/3/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-719/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236205)
Time Spent: 2h  (was: 1h 50m)

> Provide example k8s deployment files as part of the release package
> ---
>
> Key: HDDS-1412
> URL: https://issues.apache.org/jira/browse/HDDS-1412
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In HDDS-872 we added Dockerfile and skaffold definition to run dev builds on 
> 

[jira] [Commented] (HDDS-1468) Inject configuration values to Java objects

2019-05-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831525#comment-16831525
 ] 

Hudson commented on HDDS-1468:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16488 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16488/])
HDDS-1468. Inject configuration values to Java objects (elek: rev 
a2887f5c23a695e74bb7693207e9240c8b94d8cf)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestReplicationManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/safemode/TestSafeModeHandler.java
* (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/Config.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/ConfigGroup.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/SimpleConfiguration.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/ConfigType.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/ConfigurationException.java


> Inject configuration values to Java objects
> ---
>
> Key: HDDS-1468
> URL: https://issues.apache.org/jira/browse/HDDS-1468
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> According to the design doc in the parent issue we would like to support Java 
> configuration objects which are simple POJOs whose fields/setters are 
> annotated. As a first step we can introduce the 
> OzoneConfiguration.getConfigObject() API, which can create the config object 
> and inject the configuration into it.
> Later we can improve it with an annotation processor which can generate the 
> ozone-default.xml.
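
As a rough illustration of the idea, a configuration POJO could look like the
sketch below. The annotation attribute names, the key and the default value
are assumptions made for the example and may not match the committed patch;
only the getConfigObject() call is taken from the description above.

import org.apache.hadoop.hdds.conf.Config;
import org.apache.hadoop.hdds.conf.ConfigGroup;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

/**
 * Illustrative config POJO: a plain Java object whose annotated setter
 * describes the key, default value and documentation of one setting.
 */
@ConfigGroup(prefix = "hdds.scm.replication")
public class ReplicationConf {

  private long intervalMillis;

  // The attribute names below are assumed for this sketch.
  @Config(key = "thread.interval",
      defaultValue = "300000",
      description = "Interval in milliseconds between replication checks.")
  public void setIntervalMillis(long intervalMillis) {
    this.intervalMillis = intervalMillis;
  }

  public long getIntervalMillis() {
    return intervalMillis;
  }

  public static void main(String[] args) {
    // The framework instantiates the POJO and calls the annotated setter
    // with the configured value (or the declared default).
    OzoneConfiguration conf = new OzoneConfiguration();
    ReplicationConf replication = conf.getConfigObject(ReplicationConf.class);
    System.out.println(replication.getIntervalMillis());
  }
}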



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1479) Update S3.md documentation

2019-05-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1479?focusedWorklogId=236193=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236193
 ]

ASF GitHub Bot logged work on HDDS-1479:


Author: ASF GitHub Bot
Created on: 02/May/19 09:51
Start Date: 02/May/19 09:51
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #791: HDDS-1479. Update 
S3.md documentation
URL: https://github.com/apache/hadoop/pull/791
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 236193)
Time Spent: 50m  (was: 40m)

> Update S3.md documentation
> --
>
> Key: HDDS-1479
> URL: https://issues.apache.org/jira/browse/HDDS-1479
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-791 implemented the range get operation.
>  
> The S3.md documentation contains the following line: 
> GET Object | implemented | Range headers are not supported
>  
> This should be updated to remove the part `Range headers are not supported`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1479) Update S3.md documentation

2019-05-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton resolved HDDS-1479.

   Resolution: Fixed
Fix Version/s: 0.5.0

> Update S3.md documentation
> --
>
> Key: HDDS-1479
> URL: https://issues.apache.org/jira/browse/HDDS-1479
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HDDS-791 implemented the range get operation.
>  
> The S3.md documentation contains the following line: 
> GET Object | implemented | Range headers are not supported
>  
> This should be updated to remove the part `Range headers are not supported`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


