[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684539#comment-16684539
 ] 

Chao Sun commented on HDFS-14067:
-

Thanks [~xkrogen] and [~shv]!

bq. but is the current state of each NN tracked somewhere that could become 
confused if a standby suddenly appears or disappears because of a manual 
transition to/from observer?

Is the concern that the states could be cached somewhere, or about potential 
conflicts between manual and auto failover, where a standby could be involved 
in both?

bq. Also, you sometimes use HAServiceState.STATE_NAME, and sometimes refer 
directly to the state name via the static import, can you use one or the other 
throughout the patch?

Sure, will fix. 

bq. We should wait for HDFS-14035 here, since it adds 
ClientProtocol.getHAServiceState(), which should be used here instead of 
HAServiceProtocol. Otherwise we will have the same problems with delegation 
token as in HDFS-14035.

Hmm, why should we use {{ClientProtocol.getHAServiceState()}}? We are already 
calling {{HAServiceProtocol}} methods in the state transition, so whoever calls 
it should already be authenticated. Is that correct?


> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently, if automatic failover is enabled in an HA environment, a 
> transition from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Commented] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684562#comment-16684562
 ] 

Hadoop QA commented on HDDS-819:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
59s{color} | {color:green} ozonefs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947917/HDDS-819.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a18e0a924b36 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b6d4e19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1692/testReport/ |
| Max. process+thread count | 2628 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1692/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Match OzoneFileSystem behavior with S3AFileSystem
> -
>
> Key: HDDS-819
> URL: https://issues.apache.org/jira/browse/HDDS-819
>  

[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684569#comment-16684569
 ] 

CR Hota commented on HDFS-14070:


[~elgoiri]   [~brahmareddy]

Could you help review and commit this? The Router will extend the new methods 
and have its own implementation w.r.t. webhdfs token management.

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken and generateDelegationTokens should be extensible. 
> Router can then have its own implementation.






[jira] [Updated] (HDDS-831) TestOzoneShell in integration-test is flaky

2018-11-12 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-831:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk.
Thanks [~nandakumar131] for fixing this.

> TestOzoneShell in integration-test is flaky
> ---
>
> Key: HDDS-831
> URL: https://issues.apache.org/jira/browse/HDDS-831
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-831.000.patch
>
>
> TestOzoneShell in integration-test is flaky; it fails in a few Jenkins runs.
> https://builds.apache.org/job/PreCommit-HDDS-Build/1685/artifact/out/patch-unit-hadoop-ozone_integration-test.txt






[jira] [Commented] (HDDS-825) Code cleanup based on messages from ErrorProne

2018-11-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684643#comment-16684643
 ] 

Hanisha Koneru commented on HDDS-825:
-

Thanks [~anu] for cleaning up the code base.
LGTM overall. 
{{TestOzoneVolumes#testGetVolumesOfAnotherUserShouldFail}} is failing locally 
for me too. The other two tests are passing locally.
+1 with that addressed (this particular test was not running before, so I think 
we can skip enabling it in this patch and fix it later; see the sketch below).

> Code cleanup based on messages from ErrorProne
> --
>
> Key: HDDS-825
> URL: https://issues.apache.org/jira/browse/HDDS-825
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-825.001.patch, HDDS-825.002.patch, 
> HDDS-825.003.patch
>
>
> I ran ErrorProne (http://errorprone.info/) on the Ozone/HDDS code base and it 
> threw lots of errors. This patch fixes many of the issues pointed out by 
> ErrorProne.
> The main classes of errors fixed in this patch are:
> * http://errorprone.info/bugpattern/DefaultCharset
> * http://errorprone.info/bugpattern/ComparableType
> * http://errorprone.info/bugpattern/StringSplitter
> * http://errorprone.info/bugpattern/IntLongMath
> * http://errorprone.info/bugpattern/JavaLangClash
> * http://errorprone.info/bugpattern/CatchFail
> * http://errorprone.info/bugpattern/JdkObsolete
> * http://errorprone.info/bugpattern/AssertEqualsArgumentOrderChecker
> * http://errorprone.info/bugpattern/CatchAndPrintStackTrace
> It is quite instructive to read through these errors and see the mistakes we 
> made.






[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684658#comment-16684658
 ] 

Hadoop QA commented on HDFS-14017:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
41s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 10 new + 3 unchanged - 3 fixed = 13 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14017 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947928/HDFS-14017-HDFS-12943.009.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 614a7ac5efb0 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 8b5277f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25497/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25497/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDDS-576) Move ContainerWithPipeline creation to RPC endpoint

2018-11-12 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684659#comment-16684659
 ] 

Yiqun Lin commented on HDDS-576:


[~nandakumar131], as we have removed the ALLOCATED and CREATING states for 
containers, we can reword the javadoc of {{ContainerStateManager}} in a 
separate jira.
{noformat}
 * This is how a create container happens: 1. When a container is created, the
 * Server(or SCM) marks that Container as ALLOCATED state. In this state, SCM
 * has chosen a pipeline for container to live on. However, the container is not
 * created yet. This container along with the pipeline is returned to the
 * client.
 * 
 * 2. The client when it sees the Container state as ALLOCATED understands that
 * container needs to be created on the specified pipeline. The client lets the
 * SCM know that saw this flag and is initiating the on the data nodes.
 * 

{noformat}

> Move ContainerWithPipeline creation to RPC endpoint
> ---
>
> Key: HDDS-576
> URL: https://issues.apache.org/jira/browse/HDDS-576
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-576.000.patch
>
>
> With independent Pipeline and Container Managers in SCM, the creation of 
> ContainerWithPipeline can be moved to RPC endpoint. This will ensure clear 
> separation of the pipeline Manager and Container Manager






[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned. This is not enough info, because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 *** Easier initial implementation
 ** Disadvantages:
 *** Add load to normal NN operation by checking every time a DN is 
decommissioned
 *** More difficult to add debugging information later on
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldn't be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will actually be much more 
expensive, because we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have, etc.

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned. This is not enough info, because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 ** Disadvantages:
 *** Add load to normal NN operation by checking every time a DN is 
decommissioned
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldn't be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will actually be much more 
expensive, because we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have, etc.


> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned. This is not enough info, because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  *** Easier initial implementation
>  ** Disadvantages:
>  *** Add load to normal NN operation by checking every time a DN is 
> decommissioned
>  *** More difficult to add debugging information later on
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will actually be much more 
> expensive, because we will have to find all the blocks on that DN, and then go 
> through all the blocks again and count how many replicas we have, etc.






[jira] [Commented] (HDDS-832) Docs folder is missing from the Ozone distribution package

2018-11-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684474#comment-16684474
 ] 

Anu Engineer commented on HDDS-832:
---

+1. I will commit this shortly.

> Docs folder is missing from the Ozone distribution package
> --
>
> Key: HDDS-832
> URL: https://issues.apache.org/jira/browse/HDDS-832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-832-ozone-0.3.001.patch
>
>
> After the 0.2.1 release, the dist package creation (together with the 
> classpath generation) was changed. 
> Problems: 
> 1. /docs folder is missing from the dist package
> 2. /docs is missing from the scm/om ui






[jira] [Updated] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-14065:
-
Hadoop Flags: Reviewed

> Failed Storage Locations shows nothing in the Datanode Volume Failures
> --
>
> Key: HDFS-14065
> URL: https://issues.apache.org/jira/browse/HDFS-14065
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: AfterChange.png, BeforeChange.png, HDFS-14065.patch
>
>
> The failed storage locations in the *DataNode Volume Failure* UI show 
> nothing, despite there being failed storages.






[jira] [Updated] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-14065:
-
   Resolution: Fixed
Fix Version/s: 3.2.1
   3.3.0
   3.1.2
   3.0.4
   Status: Resolved  (was: Patch Available)

+1 I've committed this.

Thanks for reporting and fixing this [~ayushtkn].

> Failed Storage Locations shows nothing in the Datanode Volume Failures
> --
>
> Key: HDFS-14065
> URL: https://issues.apache.org/jira/browse/HDFS-14065
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: AfterChange.png, BeforeChange.png, HDFS-14065.patch
>
>
> The failed storage locations in the *DataNode Volume Failure* UI show 
> nothing, despite there being failed storages.






[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684545#comment-16684545
 ] 

Erik Krogen commented on HDFS-14067:


{quote}
Is the concern about that the states could be cached somewhere? or potential 
conflicts between manual and auto failover, where a standby could be involved 
in both?
{quote}
I think it's more along the lines of the latter. Let me rephrase my question: for what 
reason are manual transitions between active and standby disallowed, and what 
is different about the standby/observer transition that makes it allowed? 
Intuitively it makes sense, but we should be careful about any assumptions that 
we might break.

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently, if automatic failover is enabled in an HA environment, a 
> transition from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Created] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)
CR Hota created HDFS-14070:
--

 Summary: Refactor NameNodeWebHdfsMethods to allow better 
extensibility
 Key: HDFS-14070
 URL: https://issues.apache.org/jira/browse/HDFS-14070
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: CR Hota
Assignee: CR Hota


Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
cancelDelegationToken and generateDelegationTokens should be extensible. Router 
can then have its own implementation.






[jira] [Commented] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684573#comment-16684573
 ] 

Jitendra Nath Pandey commented on HDDS-675:
---

# The purpose of {{overWriteFlag}} in {{ChunkOutputStream}} is not clear to me. 
Are you using it for a retry upon an exception? Why wouldn't it work if we just 
rely on {{lastSuccessfulFlushIndex}}?
 # The default {{watch.request.timeout}} of 5 seconds is too aggressive; we 
should make it at least 30 seconds (see the sketch after this list).
 # The change in {{XceiverClientManager}} seems unnecessary. If it is a 
cleanup, we should rather do it in a separate jira; and if Ratis is not 
relevant in this class, that should be removed as well.

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, HDDS-675.005.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.






[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684633#comment-16684633
 ] 

CR Hota commented on HDFS-14070:


[~elgoiri] Thanks for reviewing.

In RouterWebHDFSMethods (which extends NamenodeWebHdfsMethods), I plan to 
override three methods, i.e. getDelegationToken, cancelDelegationToken and 
renewDelegationToken. In the overrides, we can use RouterRpcServer instead of 
NameNodeRpcServer. With this refactoring, the NameNode's webhdfs can continue 
to use NameNodeRpcServer, as that now becomes an implementation detail instead 
of the earlier dependency on the NameNode as an input parameter. This way we 
can re-use a lot of the current NameNode code.

 

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken and generateDelegationTokens should be extensible. 
> Router can then have its own implementation.






[jira] [Updated] (HDFS-14048) DFSOutputStream close() throws exception on subsequent call after DataNode restart

2018-11-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14048:
-
Fix Version/s: (was: 2.9.2)
   2.9.3

> DFSOutputStream close() throws exception on subsequent call after DataNode 
> restart
> --
>
> Key: HDFS-14048
> URL: https://issues.apache.org/jira/browse/HDFS-14048
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.3.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.1.2, 3.3.0, 3.2.1, 2.9.3
>
> Attachments: HDFS-14048-branch-2.000.patch, HDFS-14048.000.patch
>
>
> We recently discovered an issue in which, during a rolling upgrade, some jobs 
> were failing with exceptions like (sadly this is the whole stack trace):
> {code}
> java.io.IOException: A datanode is restarting: 
> DatanodeInfoWithStorage[1.1.1.1:71,BP-,DISK]
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:877)
> {code}
> with an earlier statement in the log like:
> {code}
> INFO [main] org.apache.hadoop.hdfs.DFSClient: A datanode is restarting: 
> DatanodeInfoWithStorage[1.1.1.1:71,BP-,DISK]
> {code}
> Strangely we did not see any other logs about the {{DFSOutputStream}} failing 
> after waiting for the DataNode restart. We eventually realized that in some 
> cases {{DFSOutputStream#close()}} may be called more than once, and that if 
> so, the {{IOException}} above is thrown on the _second_ call to {{close()}} 
> (this is even with HDFS-5335; prior to this it would have been thrown on all 
> calls to {{close()}} besides the first).
> The problem is that in {{DataStreamer#createBlockOutputStream()}}, after the 
> new output stream is created, it resets the error states:
> {code}
> errorState.resetInternalError();
> // remove all restarting nodes from failed nodes list
> failed.removeAll(restartingNodes);
> restartingNodes.clear(); 
> {code}
> But it forgets to clear {{lastException}}. When 
> {{DFSOutputStream#closeImpl()}} is called a second time, this block is 
> triggered:
> {code}
> if (isClosed()) {
>   LOG.debug("Closing an already closed stream. [Stream:{}, streamer:{}]",
>   closed, getStreamer().streamerClosed());
>   try {
> getStreamer().getLastException().check(true);
> {code}
> The second time, {{isClosed()}} is true, so the exception checking occurs and 
> the "Datanode is restarting" exception is thrown even though the stream has 
> already been successfully closed.






[jira] [Commented] (HDDS-832) Docs folder is missing from the Ozone distribution package

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684604#comment-16684604
 ] 

Hadoop QA commented on HDDS-832:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} ozone-0.3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
45s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
30s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 
43s{color} | {color:green} ozone-0.3 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m  
9s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
52s{color} | {color:green} ozone-0.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
19s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 33s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-832 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947910/HDDS-832-ozone-0.3.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  shadedclient  xml  |
| uname | Linux 13363fda069b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | ozone-0.3 / 612236b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| whitespace | 

[jira] [Updated] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-12 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14017:
--
Attachment: HDFS-14017-HDFS-12943.009.patch

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However, this is not enough, 
> because when calling the constructor of {{ObserverReadProxyProvider}} in 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply.
>  
> A second issue closely related is about delegation token. For example, in 
> current IPFailover setup, say we have a virtual host nn.xyz.com, which points 
> to either of two physical nodes nn1.xyz.com or nn2.xyz.com. In current HDFS, 
> there is always only one DT being exchanged, which has hostname nn.xyz.com. 
> The server only issues this DT, and the client only knows the host 
> nn.xyz.com, so all is good. But in Observer read, even with IPFailover, the 
> client will no longer contact nn.xyz.com, but will actively reach out to 
> nn1.xyz.com and nn2.xyz.com. During this process, the current code will look 
> for a DT associated with hostname nn1.xyz.com or nn2.xyz.com, which is 
> different from the DT given by the NN, causing token authentication to fail. 
> This happens in 
> {{AbstractDelegationTokenSelector#selectToken}}. New IPFailover proxy 
> provider will need to resolve this as well.






[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684680#comment-16684680
 ] 

Hadoop QA commented on HDFS-14070:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14070 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947924/HDFS-14070.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d6deccf561fa 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b6d4e19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25496/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25496/testReport/ |
| Max. process+thread count | 2981 (vs. ulimit of 1) |
| modules | 

[jira] [Commented] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684743#comment-16684743
 ] 

Shashikant Banerjee commented on HDDS-675:
--

Thanks [~jnp], for the review. Patch v6 addresses the comments as per our 
discussion.

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, 
> HDDS-675.005.patch, HDDS-675.006.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.






[jira] [Updated] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-675:
-
Attachment: HDDS-675.006.patch

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, 
> HDDS-675.005.patch, HDDS-675.006.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.






[jira] [Commented] (HDDS-825) Code cleanup based on messages from ErrorProne

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684586#comment-16684586
 ] 

Hadoop QA commented on HDDS-825:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 49 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
 6s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 18 
fixed = 0 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} framework in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | 

[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684614#comment-16684614
 ] 

Chen Liang commented on HDFS-14017:
---

Post v009 patch.

Had some offline discussion with [~shv] and [~xkrogen]. The main point of the 
v009 patch is to not use all the configured physical addresses, but only the 
physical addresses of one arbitrary name service. If there is only one name 
service, there is no difference. Ideally we would like to resolve the 
inconsistency between virtual IP and name services, and behave more reasonably 
under federation; we still need to come up with a plan for that. This is only 
meant to be a temporary solution for now; a rough sketch of the idea is below.

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However, this is not enough, 
> because when calling the constructor of {{ObserverReadProxyProvider}} in 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply.
>  
> A second issue closely related is about delegation token. For example, in 
> current IPFailover setup, say we have a virtual host nn.xyz.com, which points 
> to either of two physical nodes nn1.xyz.com or nn2.xyz.com. In current HDFS, 
> there is always only one DT being exchanged, which has hostname nn.xyz.com. 
> The server only issues this DT, and the client only knows the host 
> nn.xyz.com, so all is good. But in Observer read, even with IPFailover, the 
> client will no longer contact nn.xyz.com, but will actively reach out to 
> nn1.xyz.com and nn2.xyz.com. During this process, the current code will look 
> for a DT associated with hostname nn1.xyz.com or nn2.xyz.com, which is 
> different from the DT given by the NN, causing token authentication to fail. 
> This happens in 
> {{AbstractDelegationTokenSelector#selectToken}}. New IPFailover proxy 
> provider will need to resolve this as well.
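
As a concrete illustration of the mismatch, a minimal, runnable sketch (not 
the patch itself; it pins token service keys to host:port so the example is 
self-contained):

{code:java}
import java.net.InetSocketAddress;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.SecurityUtil;

public class TokenServiceMismatch {
  public static void main(String[] args) {
    // Key tokens by host:port instead of IP so no DNS lookup is needed here.
    SecurityUtil.setTokenServiceUseIp(false);

    // The DT issued by the NN is keyed on the virtual address.
    Text issued = SecurityUtil.buildTokenService(
        new InetSocketAddress("nn.xyz.com", 8020));

    // Under observer reads the client contacts the physical hosts directly,
    // so it searches its credentials under a different service key.
    Text lookedUp = SecurityUtil.buildTokenService(
        new InetSocketAddress("nn1.xyz.com", 8020));

    // Different keys, so selectToken() finds no matching token.
    System.out.println(issued + " vs " + lookedUp);
  }
}
{code}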



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684613#comment-16684613
 ] 

Hadoop QA commented on HDFS-14035:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
14s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
43s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 11s{color} | {color:orange} hadoop-hdfs-project: The patch generated 27 new 
+ 183 unchanged - 0 fixed = 210 total (was 183) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 20s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
47s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684621#comment-16684621
 ] 

Chen Liang commented on HDFS-14035:
---

I ran the tests locally; none of TestEditLogTailer, TestNamenodeCapacityReport, 
or TestBPOfferService failed. The failed CTEST runs are irrelevant.

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, when the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.
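
A minimal sketch of the alternative, assuming the 
{{ClientProtocol#getHAServiceState()}} call that this JIRA adds; probing over 
{{ClientProtocol}} keeps the request on the same token-authenticated RPC path 
as normal client calls:

{code:java}
import java.io.IOException;

import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

public class NnStateProbe {
  // Unlike HAServiceProtocol#getServiceStatus, this reuses the ClientProtocol
  // proxy, so delegation-token authentication applies to the probe as well.
  public static HAServiceState probe(ClientProtocol proxy) throws IOException {
    return proxy.getHAServiceState();
  }
}
{code}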



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684621#comment-16684621
 ] 

Chen Liang edited comment on HDFS-14035 at 11/13/18 2:06 AM:
-

I ran the tests locally; none of TestEditLogTailer, TestNamenodeCapacityReport, 
or TestBPOfferService failed. The failed CTEST runs are unrelated.


was (Author: vagarychen):
I ran the tests locally; none of TestEditLogTailer, TestNamenodeCapacityReport, 
or TestBPOfferService failed. The failed CTEST runs are irrelevant.

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, when the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-12 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-819:

Attachment: HDDS-819.003.patch

> Match OzoneFileSystem behavior with S3AFileSystem
> -
>
> Key: HDDS-819
> URL: https://issues.apache.org/jira/browse/HDDS-819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-819.001.patch, HDDS-819.002.patch, 
> HDDS-819.003.patch
>
>
> To match the behavior of o3fs with that of the S3AFileSystem, the following 
> changes need to be made to OzoneFileSystem.
>  # When creating files, we should add only 1 key. Keys corresponding to the 
> parent directories should not be created.
>  # {{GetFileStatus}} should return the status for fake directories 
> (directories which do not actually exist as a key, but there exists a key 
> which is a child of this directory). For example, if there exists a key 
> _/dir1/dir2/file2_, {{GetFileStatus("/dir1/")}} should return _/dir1/_ as a 
> directory.
>  # {{ListStatus}} on a directory should also list fake sub-directories along 
> with files.
>  # {{ListStatus}} on a directory should also list files and sub-directories 
> with the same name.
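
A minimal sketch of the fake-directory check in item 2 (an illustration over 
an in-memory key set, not the actual OzoneFileSystem code):

{code:java}
import java.util.SortedSet;
import java.util.TreeSet;

public class FakeDirCheck {
  // A path with no key of its own is still a directory if some key lives
  // underneath it, i.e. some key starts with "path/".
  static boolean isFakeDirectory(SortedSet<String> keys, String path) {
    String prefix = path.endsWith("/") ? path : path + "/";
    for (String key : keys.tailSet(prefix)) {
      // tailSet jumps to the first key >= prefix; that one decides.
      return key.startsWith(prefix);
    }
    return false;
  }

  public static void main(String[] args) {
    SortedSet<String> keys = new TreeSet<>();
    keys.add("dir1/dir2/file2");
    System.out.println(isFakeDirectory(keys, "dir1")); // true
  }
}
{code}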



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned; this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 ** Disadvantages:
 *** Adds load to normal NN operation by checking every time a DN is 
decommissioned
 # Create a new API for querying more detailed info about one DN
 ** Advantages:
 *** We wouldn't be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand will actually be much more expensive, because 
we will have to find all the blocks on that DN and then go through all of 
those blocks again to count how many replicas each has, etc.

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned; this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 ** Disadvantages:
 *** 
 # Create a new API for querying more detailed info about one DN
 ** Advantages:
 *** We wouldn't be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand will actually be much more expensive, because 
we will have to find all the blocks on that DN and then go through all of 
those blocks again to count how many replicas each has, etc.


> Better debuggability for datanode decommissioning
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned; this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  ** Disadvantages:
>  *** Adds load to normal NN operation by checking every time a DN is 
> decommissioned
>  # Create a new API for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand will actually be much more expensive, because 
> we will have to find all the blocks on that DN and then go through all of 
> those blocks again to count how many replicas each has, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684494#comment-16684494
 ] 

Hudson commented on HDFS-14065:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15412 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15412/])
HDFS-14065. Failed Storage Locations shows nothing in the Datanode (arp: rev 
b6d4e19f34f474ea8068ebb374f55e0db2f714da)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> Failed Storage Locations shows nothing in the Datanode Volume Failures
> --
>
> Key: HDFS-14065
> URL: https://issues.apache.org/jira/browse/HDFS-14065
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: AfterChange.png, BeforeChange.png, HDFS-14065.patch
>
>
> The failed storage locations in the *DataNode Volume Failure* UI show 
> nothing, despite there being failed storages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684582#comment-16684582
 ] 

Íñigo Goiri commented on HDFS-14070:


Thanks [~crh] for the patch.
This looks reasonable; for more context, can you specify which methods you 
would override and how?

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken, and generateDelegationTokens should be extensible. 
> The Router can then have its own implementation.
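
A hypothetical, self-contained sketch of the refactor pattern: the base 
servlet exposes protected hooks and the Router subclass overrides only token 
handling. The names echo the discussion but are not the real 
NamenodeWebHdfsMethods signatures:

{code:java}
class BaseWebHdfsMethods {
  protected long renewDelegationToken(String tokenString) {
    return 0L; // NameNode-side renewal would happen here.
  }

  protected void cancelDelegationToken(String tokenString) {
    // NameNode-side cancellation would happen here.
  }
}

class RouterWebHdfsMethods extends BaseWebHdfsMethods {
  @Override
  protected long renewDelegationToken(String tokenString) {
    // Router-specific token management (e.g. its own secret manager).
    return super.renewDelegationToken(tokenString);
  }
}
{code}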



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-12 Thread Jiandan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684619#comment-16684619
 ] 

Jiandan Yang  commented on HDFS-14045:
--

Hi, [~xkrogen],
I have updated the patch according to your review comments; please help review 
it again.

> Use different metrics in DataNode to better measure latency of 
> heartbeat/blockReports/incrementalBlockReports of Active/Standby NN
> --
>
> Key: HDFS-14045
> URL: https://issues.apache.org/jira/browse/HDFS-14045
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-14045.001.patch, HDFS-14045.002.patch, 
> HDFS-14045.003.patch, HDFS-14045.004.patch, HDFS-14045.005.patch, 
> HDFS-14045.006.patch, HDFS-14045.007.patch, HDFS-14045.008.patch
>
>
> Currently the DataNode uses the same metrics to measure RPC latency to the 
> NameNodes, but the Active and Standby usually perform differently at the same 
> time, especially in a large cluster. For example, the RPC latency of the 
> Standby is very long when the Standby is catching up on the editlog, so we 
> may misunderstand the state of HDFS. Using different metrics for the Active 
> and Standby can help us obtain more precise metric data.
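
A minimal sketch of the idea, assuming one rate metric per NN (suffixed with 
that NN's id) instead of a single shared rate; the metric and class names are 
illustrative, not the ones in the patch:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;

public class PerNnLatencyMetrics {
  private final MetricsRegistry registry = new MetricsRegistry("BPServiceActor");
  private final Map<String, MutableRate> rates = new ConcurrentHashMap<>();

  // One MutableRate per NN, so Active and Standby latencies stay separate.
  public void addHeartbeatLatency(String nnId, long millis) {
    rates.computeIfAbsent(nnId,
        id -> registry.newRate("heartbeats_" + id, "Heartbeat RPC to " + id))
        .add(millis);
  }
}
{code}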



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2018-11-12 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684637#comment-16684637
 ] 

Weiwei Yang commented on HDFS-6874:
---

Hi [~elgoiri]

I have corrected the logging and removed GET_BLOCK_LOCATIONS from 
HttpFSParametersProvider in the v10 patch. HttpFS only needs to support the 
GETFILEBLOCKLOCATIONS API. Please take a look, thanks.

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.02.patch, HDFS-6874.03.patch, HDFS-6874.04.patch, 
> HDFS-6874.05.patch, HDFS-6874.06.patch, HDFS-6874.07.patch, 
> HDFS-6874.08.patch, HDFS-6874.09.patch, HDFS-6874.10.patch, HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation is missing in HttpFS, although it is 
> already supported in WebHDFS. So far, for a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer returns BAD_REQUEST:
> ...
>  case GETFILEBLOCKLOCATIONS: {
> response = Response.status(Response.Status.BAD_REQUEST).build();
> break;
>   }
>  
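
A minimal sketch of what serving the operation could look like instead of 
BAD_REQUEST (the JSON field names are assumptions, not the exact WebHDFS wire 
format):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsSketch {
  // Builds a JSON-ready map of the file's block locations.
  public static Map<String, Object> toJson(FileSystem fs, Path path,
      long offset, long length) throws IOException {
    BlockLocation[] locations =
        fs.getFileBlockLocations(fs.getFileStatus(path), offset, length);
    List<Map<String, Object>> blocks = new ArrayList<>();
    for (BlockLocation loc : locations) {
      Map<String, Object> b = new LinkedHashMap<>();
      b.put("offset", loc.getOffset());
      b.put("length", loc.getLength());
      b.put("hosts", Arrays.asList(loc.getHosts()));
      blocks.add(b);
    }
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("BlockLocations", blocks);
    return json;
  }
}
{code}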



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-11-12 Thread yanghuafeng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yanghuafeng updated HDFS-13852:
---
Attachment: HDFS-13852-HDFS-13891.0.patch

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852-HDFS-13891.0.patch, HDFS-13852.001.patch, 
> HDFS-13852.002.patch, HDFS-13852.003.patch, HDFS-13852.004.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, 
> and we can set dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire to avoid timeouts. But when we 
> start the router, FederationMetrics will also invoke the method to get node 
> usage. If a timeout error happens there, we cannot adjust the timeout 
> parameter, and the timeout in FederationMetrics and NamenodeBeanMetrics 
> should be the same.
>  
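
For reference, a minimal sketch of the two router-side settings named above 
(the values are examples, not defaults; moving the key constants into 
RBFConfigKeys, which is the point of this JIRA, does not change the key 
strings):

{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class RouterDnReportConf {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Example values only; pick these to fit the cluster size.
    conf.setTimeDuration(
        "dfs.federation.router.dn-report.time-out", 30, TimeUnit.SECONDS);
    conf.setTimeDuration(
        "dfs.federation.router.dn-report.cache-expire", 5, TimeUnit.MINUTES);
    System.out.println(conf.getTimeDuration(
        "dfs.federation.router.dn-report.time-out", 0, TimeUnit.MILLISECONDS));
  }
}
{code}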



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684740#comment-16684740
 ] 

Brahma Reddy Battula commented on HDFS-14065:
-

Linking the broken JIRA. Nice catch, [~ayushtkn].

> Failed Storage Locations shows nothing in the Datanode Volume Failures
> --
>
> Key: HDFS-14065
> URL: https://issues.apache.org/jira/browse/HDFS-14065
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: AfterChange.png, BeforeChange.png, HDFS-14065.patch
>
>
> The failed storage locations in the *DataNode Volume Failure* UI show 
> nothing, despite there being failed storages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-11-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684759#comment-16684759
 ] 

Akira Ajisaka commented on HDFS-13852:
--

The test failure is related to HADOOP-15916, which should be backported to the 
HDFS-13891 branch.

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852-HDFS-13891.0.patch, HDFS-13852.001.patch, 
> HDFS-13852.002.patch, HDFS-13852.003.patch, HDFS-13852.004.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, 
> and we can set dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire to avoid timeouts. But when we 
> start the router, FederationMetrics will also invoke the method to get node 
> usage. If a timeout error happens there, we cannot adjust the timeout 
> parameter, and the timeout in FederationMetrics and NamenodeBeanMetrics 
> should be the same.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684478#comment-16684478
 ] 

Hanisha Koneru commented on HDDS-819:
-

Thank you [~arpitagarwal] for the review. I have updated the patch to replace 
ListStatusIterator#subDirPaths and added javadocs for the new functions.

> Match OzoneFileSystem behavior with S3AFileSystem
> -
>
> Key: HDDS-819
> URL: https://issues.apache.org/jira/browse/HDDS-819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-819.001.patch, HDDS-819.002.patch, 
> HDDS-819.003.patch
>
>
> To match the behavior of o3fs with that of the S3AFileSystem, the following 
> changes need to be made to OzoneFileSystem.
>  # When creating files, we should add only 1 key. Keys corresponding to the 
> parent directories should not be created.
>  # {{GetFileStatus}} should return the status for fake directories 
> (directories which do not actually exist as a key, but there exists a key 
> which is a child of this directory). For example, if there exists a key 
> _/dir1/dir2/file2_, {{GetFileStatus("/dir1/")}} should return _/dir1/_ as a 
> directory.
>  # {{ListStatus}} on a directory should also list fake sub-directories along 
> with files.
>  # {{ListStatus}} on a directory should also list files and sub-directories 
> with the same name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we don't provide any debugging info for a decommissioning DN, so it 
is difficult to determine which blocks are on their last replica. We have two 
design options:
 # Add block info for blocks with low replication (configurable)
 ** Advantages:
 *** Initial debugging information would be more thorough
 *** Easier initial implementation
 ** Disadvantages:
 *** Adds load to normal NN operation by checking every time a DN is 
decommissioned
 *** More difficult to add debugging information later on
 # Create a new API for querying more detailed info about one DN
 ** Advantages:
 *** We wouldn't be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand will actually be much more expensive, because 
we will have to find all the blocks on that DN and then go through all of 
those blocks again to count how many replicas each has, etc.

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned; this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 *** Easier initial implementation
 ** Disadvantages:
 *** Adds load to normal NN operation by checking every time a DN is 
decommissioned
 *** More difficult to add debugging information later on
 # Create a new API for querying more detailed info about one DN
 ** Advantages:
 *** We wouldn't be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand will actually be much more expensive, because 
we will have to find all the blocks on that DN and then go through all of 
those blocks again to count how many replicas each has, etc.


> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we don't provide any debugging info for a decommissioning DN, so 
> it is difficult to determine which blocks are on their last replica. We have 
> two design options:
>  # Add block info for blocks with low replication (configurable)
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  *** Easier initial implementation
>  ** Disadvantages:
>  *** Adds load to normal NN operation by checking every time a DN is 
> decommissioned
>  *** More difficult to add debugging information later on
>  # Create a new API for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand will actually be much more expensive, because 
> we will have to find all the blocks on that DN and then go through all of 
> those blocks again to count how many replicas each has, etc.
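
To make the cost argument for option 2 concrete, a hypothetical sketch (the 
helpers {{blocksOn}} and {{liveReplicas}} are illustrative placeholders, not 
NameNode APIs):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DecomDebugSketch {
  // Placeholder: all block ids stored on the given DN.
  static List<Long> blocksOn(long dnId) { return Collections.emptyList(); }

  // Placeholder: live replica count for a block.
  static int liveReplicas(long blockId) { return 3; }

  static List<Long> blocksOnLastReplica(long dnId) {
    List<Long> atRisk = new ArrayList<>();
    for (long blockId : blocksOn(dnId)) {  // pass 1: walk every block on the DN
      if (liveReplicas(blockId) <= 1) {    // pass 2: count replicas per block
        atRisk.add(blockId);
      }
    }
    return atRisk;
  }
}
{code}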



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684555#comment-16684555
 ] 

Hadoop QA commented on HDFS-14067:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
55s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
59s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
26s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}104m 
55s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}222m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947893/HDFS-14067-HDFS-12943.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 295749d2ed5c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 8b5277f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25492/testReport/ |
| Max. process+thread count | 

[jira] [Commented] (HDFS-14063) Support noredirect param for CREATE/APPEND/OPEN in HttpFS

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684522#comment-16684522
 ] 

Hadoop QA commented on HDFS-14063:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 22 new + 312 unchanged - 4 fixed = 334 total (was 316) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14063 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947913/HDFS-14063.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b9e77234f487 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e269c3f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25494/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25494/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25494/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
| 

[jira] [Updated] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14070:
---
Attachment: HDFS-14070.001.patch
Status: Patch Available  (was: Open)

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken, and generateDelegationTokens should be extensible. 
> The Router can then have its own implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684580#comment-16684580
 ] 

Hadoop QA commented on HDDS-675:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 15s{color} | {color:orange} root: The patch generated 8 new + 17 unchanged - 
1 fixed = 25 total (was 18) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} objectstore-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit 

[jira] [Commented] (HDDS-831) TestOzoneShell in integration-test is flaky

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684649#comment-16684649
 ] 

Hudson commented on HDDS-831:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15414 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15414/])
HDDS-831. TestOzoneShell in integration-test is flaky. Contributed by (yqlin: 
rev f8713f8adea9d69330933a2cde594ed11ed9520c)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java


> TestOzoneShell in integration-test is flaky
> ---
>
> Key: HDDS-831
> URL: https://issues.apache.org/jira/browse/HDDS-831
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-831.000.patch
>
>
> TestOzoneShell in integration-test is flaky; it fails in a few Jenkins runs.
> https://builds.apache.org/job/PreCommit-HDDS-Build/1685/artifact/out/patch-unit-hadoop-ozone_integration-test.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-734) Remove create container logic from OzoneClient

2018-11-12 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain resolved HDDS-734.
--
Resolution: Duplicate

This issue has been fixed via HDDS-733.

> Remove create container logic from OzoneClient
> --
>
> Key: HDDS-734
> URL: https://issues.apache.org/jira/browse/HDDS-734
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>
> After HDDS-733, the container will be created as part of the first chunk 
> write, so we don't need explicit container creation code in {{OzoneClient}} 
> anymore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684527#comment-16684527
 ] 

Hadoop QA commented on HDFS-14069:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 649 unchanged - 0 fixed = 657 total (was 649) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
39s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14069 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947912/HDFS-14069.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux ab208045e22f 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e269c3f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25495/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 

[jira] [Commented] (HDDS-831) TestOzoneShell in integration-test is flaky

2018-11-12 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684640#comment-16684640
 ] 

Yiqun Lin commented on HDDS-831:


Good catch! LGTM, +1.
Committing this.

> TestOzoneShell in integration-test is flaky
> ---
>
> Key: HDDS-831
> URL: https://issues.apache.org/jira/browse/HDDS-831
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-831.000.patch
>
>
> TestOzoneShell in integration-test is flaky; it fails in a few Jenkins runs.
> https://builds.apache.org/job/PreCommit-HDDS-Build/1685/artifact/out/patch-unit-hadoop-ozone_integration-test.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684701#comment-16684701
 ] 

CR Hota commented on HDFS-14070:


The test failures are unrelated to this change.

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods, methods such as renewDelegationToken, 
> cancelDelegationToken and generateDelegationTokens should be extensible. 
> Router can then have its own implementation.
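
For illustration only — a minimal sketch of the kind of refactor being
discussed, where the token-related handlers are made overridable so a
Router-side subclass can plug in its own logic. Class and method names here are
hypothetical, not the actual patch:

{code:java}
// Hypothetical sketch -- names do not match the real NamenodeWebHdfsMethods.
public class WebHdfsExtensibilityDemo {

  static class BaseWebHdfsMethods {
    // protected (not private) so a subclass can override it
    protected String renewDelegationToken(String token) {
      return "namenode renewed " + token;
    }
  }

  static class RouterWebHdfsMethods extends BaseWebHdfsMethods {
    @Override
    protected String renewDelegationToken(String token) {
      // Router-specific delegation token management would go here.
      return "router renewed " + token;
    }
  }

  public static void main(String[] args) {
    BaseWebHdfsMethods handler = new RouterWebHdfsMethods();
    System.out.println(handler.renewDelegationToken("t-123"));
  }
}
{code}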



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-735) Remove ALLOCATED and CREATING state from ContainerStateManager

2018-11-12 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain resolved HDDS-735.
--
Resolution: Duplicate

This issue has been fixed via HDDS-733.

> Remove ALLOCATED and CREATING state from ContainerStateManager
> --
>
> Key: HDDS-735
> URL: https://issues.apache.org/jira/browse/HDDS-735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
>
> After HDDS-733 and HDDS-734, we don't need ALLOCATED and CREATING state for 
> containers in SCM. The container will move to OPEN state as soon as it is 
> allocated in SCM. Since container creation happens as part of the first chunk 
> write, and the container creation operation on the datanode is idempotent, we 
> don't have to worry about giving out the same container to multiple clients 
> as soon as it is allocated.
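
A minimal sketch of why idempotent creation makes the extra states unnecessary:
repeating the create on the first chunk write is a harmless no-op, so the same
OPEN container can safely be handed to several clients. All names here are
illustrative, not SCM/datanode code:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch only.
public class IdempotentCreateDemo {
  private final ConcurrentMap<Long, String> containers = new ConcurrentHashMap<>();

  // The first chunk write triggers creation; repeating it changes nothing.
  public void createIfAbsent(long containerId) {
    containers.putIfAbsent(containerId, "OPEN");
  }

  public static void main(String[] args) {
    IdempotentCreateDemo dn = new IdempotentCreateDemo();
    dn.createIfAbsent(13L); // first client
    dn.createIfAbsent(13L); // second client racing on the same container: no-op
    System.out.println(dn.containers.get(13L)); // OPEN
  }
}
{code}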



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684764#comment-16684764
 ] 

Brahma Reddy Battula commented on HDFS-14070:
-

bq. Router will extend the new methods and have its own implementation w.r.t 
webhdfs token management.

Yes, this refactor is required. Thanks for reporting.

+1 on HDFS-14070.001.patch.

Will commit.

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods, methods such as renewDelegationToken, 
> cancelDelegationToken and generateDelegationTokens should be extensible. 
> Router can then have its own implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14063) Support noredirect param for CREATE/APPEND/OPEN in HttpFS

2018-11-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684467#comment-16684467
 ] 

Íñigo Goiri commented on HDFS-14063:


[^HDFS-14063.002.patch] adds a unit test.
I cannot reproduce the failed unit tests right now, let's see what Yetus says 
this time.

> Support noredirect param for CREATE/APPEND/OPEN in HttpFS
> -
>
> Key: HDFS-14063
> URL: https://issues.apache.org/jira/browse/HDFS-14063
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14063.000.patch, HDFS-14063.001.patch, 
> HDFS-14063.002.patch
>
>
> Currently HttpFS always redirects the URI. However, the WebUI uses 
> noredirect=true which means it only wants a response with the location. This 
> is properly done in {{NamenodeWebHDFSMethods}}. HttpFS should do the same.
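
For illustration, a tiny sketch of the contract being asked for; the strings
stand in for real HTTP responses, and this is not the HttpFS code:

{code:java}
// Illustrative sketch of the noredirect contract.
public class NoRedirectDemo {

  static String respond(String location, boolean noredirect) {
    if (noredirect) {
      // 200 OK with the location only in the body
      return "200 {\"Location\":\"" + location + "\"}";
    }
    // default behaviour: redirect the client to the datanode
    return "307 Location: " + location;
  }

  public static void main(String[] args) {
    String loc = "http://dn1:9864/webhdfs/v1/f?op=CREATE";
    System.out.println(respond(loc, false)); // 307 Location: ...
    System.out.println(respond(loc, true));  // 200 {"Location":"..."}
  }
}
{code}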



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-12 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-819:

Attachment: (was: HDDS-819.003.patch)

> Match OzoneFileSystem behavior with S3AFileSystem
> -
>
> Key: HDDS-819
> URL: https://issues.apache.org/jira/browse/HDDS-819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-819.001.patch, HDDS-819.002.patch
>
>
> To match the behavior of o3fs with that of the S3AFileSystem, the following 
> changes need to be made to OzoneFileSystem.
>  # When creating files, we should add only 1 key. Keys corresponding to the 
> parent directories should not be created.
>  # {{GetFileStatus}} should return the status for fake directories 
> (directories which do not actually exist as a key but there exists a key 
> which is a child of this directory). For example, if there exists a key 
> _/dir1/dir2/file2_, {{GetFileStatus("/dir1/")}} should return _/dir1/_ as a 
> directory.
>  # {{ListStatus}} on a directory should list fake sub-directories also along 
> with files.
> # {{ListStatus}} on a directory should also list files and sub-directories 
> with the same name (see the sketch below).
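
A minimal sketch of the fake-directory check in point 2: a path is a fake
directory if some key has it as a prefix even though no key exists for the path
itself. Names are illustrative, not the OzoneFileSystem code:

{code:java}
import java.util.TreeSet;

// Illustrative sketch: a "fake" directory exists if some key has it as a prefix.
public class FakeDirectoryDemo {
  private final TreeSet<String> keys = new TreeSet<>();

  boolean isFakeDirectory(String path) {
    String prefix = path.endsWith("/") ? path : path + "/";
    String candidate = keys.ceiling(prefix);
    return candidate != null && candidate.startsWith(prefix) && !keys.contains(path);
  }

  public static void main(String[] args) {
    FakeDirectoryDemo fs = new FakeDirectoryDemo();
    fs.keys.add("dir1/dir2/file2");
    System.out.println(fs.isFakeDirectory("dir1"));      // true
    System.out.println(fs.isFakeDirectory("dir1/dir2")); // true
    System.out.println(fs.isFakeDirectory("dir3"));      // false
  }
}
{code}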



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14069:
---
Assignee: Danny Becker
  Status: Patch Available  (was: Open)

> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  *** Easier initial implementation
>  ** Disadvantages:
>  *** Add load to normal NN operation by checking every time a DN is 
> decommissioned
>  *** More difficult to add debugging information later on
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
> *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
> *** Getting the info on demand for this case will actually be much more 
> expensive, because we will have to find all the blocks on that DN, and then 
> go through all the blocks again and count how many replicas we have, etc. 
> (see the sketch below)
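
To make the cost in option 2 concrete, a hypothetical sketch of the on-demand
query: for one DN we would scan its blocks and count those down to their last
live replica. Everything here (names and shapes) is illustrative, not a
proposed API:

{code:java}
import java.util.List;
import java.util.Map;

// Hypothetical shape of option 2: an on-demand per-datanode query.
public class DecomDebugDemo {

  // liveReplicas: blockId -> live replica count cluster-wide
  static long countLastReplicaBlocks(List<Long> blocksOnDatanode,
                                     Map<Long, Integer> liveReplicas) {
    return blocksOnDatanode.stream()
        .filter(b -> liveReplicas.getOrDefault(b, 0) <= 1)
        .count();
  }

  public static void main(String[] args) {
    System.out.println(countLastReplicaBlocks(
        List.of(1L, 2L, 3L),
        Map.of(1L, 3, 2L, 1, 3L, 2))); // 1 block on its last replica
  }
}
{code}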



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-832) Docs folder is missing from the Ozone distribution package

2018-11-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684480#comment-16684480
 ] 

Anu Engineer commented on HDDS-832:
---

I have committed this to ozone-0.3. I will leave it to you whether you want to 
bring this into trunk. Thanks for fixing it so quickly.

> Docs folder is missing from the Ozone distribution package
> --
>
> Key: HDDS-832
> URL: https://issues.apache.org/jira/browse/HDDS-832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-832-ozone-0.3.001.patch
>
>
> After the 0.2.1 release the dist package create (together with the classpath 
> generation) are changed. 
> Problems: 
> 1. /docs folder is missing from the dist package
> 2. /docs is missing from the scm/om ui



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684696#comment-16684696
 ] 

Hadoop QA commented on HDFS-13852:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
14s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
56s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
14s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947935/HDFS-13852-HDFS-13891.0.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 1d80e4bd901c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / f311303 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25498/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25498/testReport/ |
| Max. process+thread count | 99 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Created] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14065:
---

 Summary: Failed Storage Locations shows nothing in the Datanode 
Volume Failures
 Key: HDFS-14065
 URL: https://issues.apache.org/jira/browse/HDFS-14065
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


The failed storage locations in the *DataNode Volume Failures* UI show nothing, 
despite there being failed storages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-818) OzoneConfiguration uses an existing XMLRoot value

2018-11-12 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683379#comment-16683379
 ] 

Elek, Marton commented on HDDS-818:
---

Thanks [~giovanni.fumarola] for filing this issue. While the patch seems safe 
to me, I would like to understand exactly what the problem is, especially to 
avoid problems in the future (and to do some testing if possible).

bq. How to reproduce? Using any REST client tool, call 
ws/v1/cluster/scheduler-conf and select XML as the return format.

I will try it out, but unfortunately it's not clear to me how exactly I can 
test it (this is my limitation, I am not familiar enough with Yarn). I guess it 
could be a GET call without any parameter on the resource manager, hopefully 
with the default settings. Is that right?

As I understood, this is a name collision on the JAXB level, but I don't 
understand why Yarn has hdds-common jar files on the classpath. Or is it a test 
with ozonefs + jar? In that case we can create a robot test to avoid this issue 
(not this jira, but long term).

Could you please help me understand the issue in more detail?



> OzoneConfiguration uses an existing XMLRoot value
> -
>
> Key: HDDS-818
> URL: https://issues.apache.org/jira/browse/HDDS-818
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HDDS-818.v0.patch
>
>
> OzoneConfiguration and ConfInfo have 
> @XmlRootElement(name = "configuration")
> This makes REST Client crash for XML calls.
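
A minimal reproduction sketch of the collision, assuming a classpath where 
javax.xml.bind is available; the classes below are stand-ins for ConfInfo and 
OzoneConfiguration, not the real ones:

{code:java}
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;

// Minimal sketch of the duplicate-root-element problem.
public class XmlRootClashDemo {

  @XmlRootElement(name = "configuration")
  static class ConfInfoLike {}

  @XmlRootElement(name = "configuration")
  static class OzoneConfigurationLike {}

  public static void main(String[] args) {
    try {
      // Both classes claim the global element name "configuration", so any
      // XML binding that sees them together is ambiguous; depending on the
      // JAXB implementation this fails here or later in the REST layer.
      JAXBContext.newInstance(ConfInfoLike.class, OzoneConfigurationLike.class);
      System.out.println("context created; ambiguity surfaces at (un)marshal time");
    } catch (Exception e) {
      System.out.println("JAXB rejected the duplicate root element: " + e);
    }
  }
}
{code}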



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-767) OM should not search for STDOUT root logger for audit logging

2018-11-12 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683490#comment-16683490
 ] 

Elek, Marton commented on HDDS-767:
---

+1. Thanks [~dineshchitlangia], it looks good to me. 

Will commit it soon.

> OM should not search for STDOUT root logger for audit logging
> -
>
> Key: HDDS-767
> URL: https://issues.apache.org/jira/browse/HDDS-767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
> Attachments: HDDS-767.001.patch
>
>
> When we start ozone, the .out file shows the following line:
> {noformat}
> 2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
> logger config "root"{noformat}
> This is because the console appender has been disabled by default; however, 
> the incorrect log4j2 config is still trying to find the console appender.
>  
> This Jira aims to comment the following config lines to avoid this issue:
> {code:java}
> rootLogger.appenderRefs=stdout
> rootLogger.appenderRef.stdout.ref=STDOUT
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-508) Add robot framework to the apache/hadoop-runner baseimage

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-508:
--
Fix Version/s: (was: 0.3.0)

> Add robot framework to the apache/hadoop-runner baseimage
> -
>
> Key: HDDS-508
> URL: https://issues.apache.org/jira/browse/HDDS-508
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Elek, Marton
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2, newbie
>
> In HDDS-352 we moved the acceptance tests to the dist folder. Currently the 
> framework is not part of the base image we need to install it all the time.
> See the following lines in the 
> [test.sh|https://github.com/apache/hadoop/blob/trunk/hadoop-dist/src/main/smoketest/test.sh]:
> {code}
> docker-compose -f "$COMPOSE_FILE" exec datanode sudo apt-get update
> docker-compose -f "$COMPOSE_FILE" exec datanode sudo apt-get install -y 
> python-pip
> docker-compose -f "$COMPOSE_FILE" exec datanode sudo pip install 
> robotframework
> {code}
> This could be removed after we add these lines to the [docker 
> file|https://github.com/apache/hadoop/blob/docker-hadoop-runner/Dockerfile]:



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-5) Enable OzoneManager kerberos auth

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-5:

Fix Version/s: (was: 0.3.0)
   0.4.0

> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, HDDS-5-HDDS-4.01.patch, 
> HDDS-5-HDDS-4.02.patch, initial-patch.patch
>
>
> enable KSM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-14054) TestLeaseRecovery2: testHardLeaseRecoveryAfterNameNodeRestart2 and testHardLeaseRecoveryWithRenameAfterNameNodeRestart are flaky

2018-11-12 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-14054 started by Zsolt Venczel.

> TestLeaseRecovery2: testHardLeaseRecoveryAfterNameNodeRestart2 and 
> testHardLeaseRecoveryWithRenameAfterNameNodeRestart are flaky
> 
>
> Key: HDFS-14054
> URL: https://issues.apache.org/jira/browse/HDFS-14054
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.3
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: flaky-test
>
> ---
>  T E S T S
> ---
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support 
> was removed in 8.0
> Running org.apache.hadoop.hdfs.TestLeaseRecovery2
> Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 68.971 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestLeaseRecovery2
> testHardLeaseRecoveryAfterNameNodeRestart2(org.apache.hadoop.hdfs.TestLeaseRecovery2)
>   Time elapsed: 4.375 sec  <<< FAILURE!
> java.lang.AssertionError: lease holder should now be the NN
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.checkLease(TestLeaseRecovery2.java:568)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:520)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:437)
> testHardLeaseRecoveryWithRenameAfterNameNodeRestart(org.apache.hadoop.hdfs.TestLeaseRecovery2)
>   Time elapsed: 4.339 sec  <<< FAILURE!
> java.lang.AssertionError: lease holder should now be the NN
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.checkLease(TestLeaseRecovery2.java:568)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:520)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:443)
> Results :
> Failed tests: 
>   
> TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2:437->hardLeaseRecoveryRestartHelper:520->checkLease:568
>  lease holder should now be the NN
>   
> TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:443->hardLeaseRecoveryRestartHelper:520->checkLease:568
>  lease holder should now be the NN
> Tests run: 7, Failures: 2, Errors: 0, Skipped: 0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-709) Modify Close Container handling sequence on datanodes

2018-11-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-709:
-
Status: Open  (was: Patch Available)

> Modify Close Container handling sequence on datanodes
> -
>
> Key: HDDS-709
> URL: https://issues.apache.org/jira/browse/HDDS-709
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-709.000.patch, HDDS-709.001.patch, 
> HDDS-709.002.patch, HDDS-709.003.patch, HDDS-709.004.patch, 
> HDDS-709.005.patch, HDDS-709.006.patch
>
>
> With the quasi-closed container state for handling majority node failures, 
> the close container handling sequence in datanodes needs to change. Once the 
> datanodes receive a close container command from SCM, the open container 
> replicas are individually marked as being in the closing state. In the 
> closing state, only the transactions coming from the Ratis leader are 
> allowed; all other write transactions will fail. A close container 
> transaction will be queued via Ratis on the leader and replayed to the 
> followers, which makes the replica transition to the CLOSED/QUASI_CLOSED 
> state.
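
A compact sketch of the replica state flow described above; the enum and the
guard are illustrative, not the datanode code:

{code:java}
// Illustrative sketch of the close-sequence write guard.
public class ContainerCloseDemo {
  enum State { OPEN, CLOSING, QUASI_CLOSED, CLOSED }

  static boolean acceptWrite(State s, boolean fromRatisLeader) {
    // In CLOSING, only transactions replayed from the Ratis leader are allowed.
    return s == State.OPEN || (s == State.CLOSING && fromRatisLeader);
  }

  public static void main(String[] args) {
    System.out.println(acceptWrite(State.CLOSING, true));  // true
    System.out.println(acceptWrite(State.CLOSING, false)); // false
  }
}
{code}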



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-767) OM should not search for STDOUT root logger for audit logging

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-767:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> OM should not search for STDOUT root logger for audit logging
> -
>
> Key: HDDS-767
> URL: https://issues.apache.org/jira/browse/HDDS-767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
> Attachments: HDDS-767.001.patch
>
>
> When we start ozone, the .out file shows the following line:
> {noformat}
> 2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
> logger config "root"{noformat}
> This is because the console appender has been disabled by default; however, 
> the incorrect log4j2 config is still trying to find the console appender.
>  
> This Jira aims to comment the following config lines to avoid this issue:
> {code:java}
> rootLogger.appenderRefs=stdout
> rootLogger.appenderRef.stdout.ref=STDOUT
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14060) HDFS fetchdt command to return error codes on success/failure

2018-11-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683553#comment-16683553
 ] 

Steve Loughran commented on HDFS-14060:
---

Thx for picking this up

I think it's best to have a simple exit code model of
* command failed with error: -1
* command didn't fail: 0

Most ambiguous is this: if an FS didn't issue a token, is that an error or not? 
I'd argue "yes, it's an error", but there's a risk of breaking workflows. Even 
changing the exit codes here carries a bit of risk.



* trunk has some changes to its fetchdt for better testability (HDFS-13951); 
that's the one to work on. If you want this for earlier 3.x branches, that can 
be backported.
* if you actually want to see if the fetched DTs can be used, the latest 
version of https://github.com/steveloughran/cloudstore can take a token file 
{{-tokenfile }} and load it; if you log out of kerberos after collecting 
the token then it'll verify things worked.
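
A sketch of that simple exit-code model, with the ambiguous no-token case made
explicit behind a flag; codes and names here are illustrative, not the fetchdt
patch itself:

{code:java}
// Illustrative sketch of the exit-code model discussed above.
public class FetchdtExitDemo {
  static final int SUCCESS = 0;
  static final int FAIL = -1; // shells will report this as 255

  static int run(boolean tokenIssued, boolean treatNoTokenAsError) {
    if (tokenIssued) {
      return SUCCESS;
    }
    // The ambiguous case: a filesystem that issues no token.
    return treatNoTokenAsError ? FAIL : SUCCESS;
  }

  public static void main(String[] args) {
    System.out.println(run(true, true));   // 0
    System.out.println(run(false, true));  // -1
    System.out.println(run(false, false)); // 0 -- keeps old workflows alive
  }
}
{code}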

> HDFS fetchdt command to return error codes on success/failure
> -
>
> Key: HDFS-14060
> URL: https://issues.apache.org/jira/browse/HDFS-14060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Kitti Nanasi
>Priority: Major
>
> The {{hdfs fetchdt}} command always returns 0, even when there's been an 
> error (no token issued, no file to load, usage, etc). This means it's not 
> that useful as a command-line tool for testing or in scripts.
> Proposed: exit non-zero for errors; reuse LauncherExitCodes for these



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-383) Ozone Client should discard preallocated blocks from closed containers

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-383:
--
Fix Version/s: (was: 0.3.0)
   0.2.1

> Ozone Client should discard preallocated blocks from closed containers
> --
>
> Key: HDDS-383
> URL: https://issues.apache.org/jira/browse/HDDS-383
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-383.00.patch, HDDS-383.01.patch, HDDS-383.02.patch, 
> HDDS-383.03.patch
>
>
> When a key write happens in the Ozone client, blocks are preallocated based 
> on the initial size given. While the write happens, containers can get 
> closed, and if the remaining preallocated blocks belong to closed containers, 
> they can be discarded right away instead of trying to write these blocks and 
> failing with an exception. This Jira aims to address this.
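
A small sketch of the discard step, assuming hypothetical lookups for the
block-to-container mapping and the closed-container set (none of these names
come from the patch):

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch: drop preallocated blocks whose container has closed.
public class DiscardClosedBlocksDemo {

  static List<Long> usableBlocks(List<Long> preallocated,
                                 Set<Long> closedContainers,
                                 Map<Long, Long> blockToContainer) {
    return preallocated.stream()
        .filter(b -> !closedContainers.contains(blockToContainer.get(b)))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    System.out.println(usableBlocks(
        List.of(101L, 102L),
        Set.of(13L),                      // container 13 has closed
        Map.of(101L, 13L, 102L, 14L))); // [102]
  }
}
{code}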



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13998) ECAdmin NPE with -setPolicy -replicate

2018-11-12 Thread Zsolt Venczel (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683575#comment-16683575
 ] 

Zsolt Venczel commented on HDFS-13998:
--

Thank you [~brahmareddy] for taking a look!

Please find my comments below:
{quote}IMHO, HDFS-13732 change might not require..? As admin will be aware of 
configured policy and these are admin commands.
{quote}
For supportability reasons, helping administrators (there could be many) by 
displaying the actual outcome of their actions can be valuable.
 We support them by providing a warning message as well when the directory is 
not empty. I think this is also valuable despite its load (a listStatus command 
is executed that adds an extra audit log entry and might also return 1000 
FileStatus entries by default if the directory is large enough).
{quote}Adding RPC can mislead

For concurrent calls and any error while getting the policy after setting.
{quote}
In this scenario not knowing the default might be even worse.
{quote}and Extra overhead as Ayush Saxena mentioned.

Audit log ( for debugging) and RPC call
{quote}
I think we have a common understanding with [~ayushtkn] here that the overhead 
would be worth it. [~ayushtkn] can you please comment?
{quote}If we really required why can't we do through getserverdefaults()(by 
adding EC field there).
{quote}
I think any change to the default EC policy would not be reflected in the 
server defaults on the client without config redistribution, which might also 
lead to confusion.

> ECAdmin NPE with -setPolicy -replicate
> --
>
> Key: HDFS-13998
> URL: https://issues.apache.org/jira/browse/HDFS-13998
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13998.01.patch, HDFS-13998.02.patch, 
> HDFS-13998.03.patch
>
>
> HDFS-13732 tried to improve the output of the console tool. But we missed the 
> fact that for replication, {{getErasureCodingPolicy}} would return null.
> This jira is to fix it in ECAdmin, and add a unit test.
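
The fix amounts to a null check along these lines — a hedged sketch where a
plain Object stands in for the real ErasureCodingPolicy type:

{code:java}
// Illustrative sketch of the null-safe handling in ECAdmin.
public class EcAdminNullCheckDemo {

  static String describePolicy(Object ecPolicy) {
    // For -replicate, getErasureCodingPolicy() returns null, which previously NPE'd.
    return ecPolicy == null ? "replication" : ecPolicy.toString();
  }

  public static void main(String[] args) {
    System.out.println(describePolicy(null));            // replication
    System.out.println(describePolicy("RS-6-3-1024k")); // RS-6-3-1024k
  }
}
{code}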



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-389) Remove XceiverServer and XceiverClient and related classes

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-389:
--
Fix Version/s: (was: 0.3.0)

> Remove XceiverServer and XceiverClient and related classes
> --
>
> Key: HDDS-389
> URL: https://issues.apache.org/jira/browse/HDDS-389
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-389.001.patch, HDDS-389.002.patch, 
> HDDS-389.003.patch
>
>
> Grpc is now the default protocol for datanode to client communication. This 
> jira proposes to remove all the instances of the classes from the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-352) Separate install and testing phases in acceptance tests.

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-352:
--
Fix Version/s: (was: 0.3.0)

> Separate install and testing phases in acceptance tests.
> 
>
> Key: HDDS-352
> URL: https://issues.apache.org/jira/browse/HDDS-352
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: test
> Fix For: 0.2.1
>
> Attachments: HDDS-352-ozone-0.2.001.patch, 
> HDDS-352-ozone-0.2.002.patch, HDDS-352-ozone-0.2.003.patch, 
> HDDS-352-ozone-0.2.004.patch, HDDS-352-ozone-0.2.005.patch, 
> HDDS-352-ozone-0.2.006.patch, HDDS-352.00.patch, TestRun.rtf
>
>
> In the current acceptance tests (hadoop-ozone/acceptance-test) the robot 
> files contain two kinds of commands:
> 1) starting and stopping clusters
> 2) testing the basic behaviour with client calls
> It would be great to separate the two functionalities and include only the 
> testing part in the robot files.
> 1. Ideally the tests could be executed in any environment. After a kubernetes 
> install I would like to do a smoke test. It could be a different environment 
> but I would like to execute most of the tests (check ozone cli, rest api, 
> etc.)
> 2. There could be multiple ozone environments (standalone ozone cluster, hdfs 
> + ozone cluster, etc.). We need to test all of them with all the tests.
> 3. With this approach we can collect the docker-compose files just in one 
> place (hadoop-dist project). After a docker-compose up there should be a way 
> to execute the tests with an existing cluster. Something like this:
> {code}
> docker run -it apache/hadoop-runner -v ./acceptance-test:/opt/acceptance-test 
> -e SCM_URL=http://scm:9876 --network=composenetwork start-all-tests.sh
> {code}
> 4. It also means that we need to execute the tests from a separated container 
> instance. We need a configuration parameter to define the cluster topology. 
> Ideally it could be just one environment variables with the url of the scm 
> and the scm could be used to discovery all of the required components + 
> download the configuration files from there.
> 5. Until now we used the log output of the docker-compose files to do some 
> readiness probes. They should be converted to poll the jmx endpoints and 
> check if the cluster is up and running. If we need the log files for 
> additional testing we can create multiple implementations for different type 
> of environments (docker-compose/kubernetes) and include the right set of 
> functions based on an external parameters.
> 6. Still we need a generic script under the ozone-acceptance test project to 
> run all the tests (starting the docker-compose clusters, execute tests in a 
> different container, stop the cluster) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-767) OM should not search for STDOUT root logger for audit logging

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683510#comment-16683510
 ] 

Hudson commented on HDDS-767:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15404 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15404/])
HDDS-767. OM should not search for STDOUT root logger for audit logging. (elek: 
rev 9c32b50d610463bb50a25bb01606ceeea8e04507)
* (edit) hadoop-ozone/dist/src/main/conf/om-audit-log4j2.properties


> OM should not search for STDOUT root logger for audit logging
> -
>
> Key: HDDS-767
> URL: https://issues.apache.org/jira/browse/HDDS-767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
> Attachments: HDDS-767.001.patch
>
>
> When we start ozone, the .out file shows the following line:
> {noformat}
> 2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
> logger config "root"{noformat}
> This is because the console appender has been disabled by default; however, 
> the incorrect log4j2 config is still trying to find the console appender.
>  
> This Jira aims to comment the following config lines to avoid this issue:
> {code:java}
> rootLogger.appenderRefs=stdout
> rootLogger.appenderRef.stdout.ref=STDOUT
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-409) Ozone acceptance-test and integration-test packages have undefined hadoop component

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-409:
--
Fix Version/s: (was: 0.3.0)

> Ozone acceptance-test and integration-test packages have undefined hadoop 
> component
> ---
>
> Key: HDDS-409
> URL: https://issues.apache.org/jira/browse/HDDS-409
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-409.001.patch
>
>
> When building the ozone package, the acceptance-test and integration-test 
> packages create an UNDEF hadoop component in the share folder:
>  * 
> ./hadoop-ozone/acceptance-test/target/hadoop-ozone-acceptance-test-3.2.0-SNAPSHOT/share/hadoop/UNDEF/lib
>  * 
> ./hadoop-ozone/integration-test/target/hadoop-ozone-integration-test-0.2.1-SNAPSHOT/share/hadoop/UNDEF/lib
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-410) ozone scmcli list is not working properly

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-410:
--
Fix Version/s: (was: 0.3.0)

> ozone scmcli list is not working properly
> -
>
> Key: HDDS-410
> URL: https://issues.apache.org/jira/browse/HDDS-410
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-410.001.patch
>
>
> On running ozone scmcli for a container ID, it gives the following output:
>  
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 bin]# ./ozone scmcli list 
> --start=17
> Infinite recursion (StackOverflowError) (through reference chain: 
> 

[jira] [Updated] (HDDS-478) Log files related to each daemon doesn't have proper startup and shutdown logs

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-478:
--
Fix Version/s: 0.3.0

> Log files related to each daemon doesn't have proper startup and shutdown logs
> --
>
> Key: HDDS-478
> URL: https://issues.apache.org/jira/browse/HDDS-478
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Fix For: 0.3.0
>
> Attachments: HDDS-478.001.patch
>
>
> All the logs (startup/shutdown messages) go into ozone.log. We have a 
> separate log file for each daemon and that log file doesn't contain these 
> logs. 
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 logs]# cat ozone.log.2018-09-16 
> | head -20
> 2018-09-16 05:29:59,638 [main] INFO (LogAdapter.java:51) - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-468367-01-02.hwx.site/172.27.68.129
> STARTUP_MSG: args = [-createObjectStore]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Commented] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2018-11-12 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683591#comment-16683591
 ] 

Kitti Nanasi commented on HDFS-14064:
-

Thanks [~ayushtkn] for working on this!
The code looks good to me, I just have minor comments about the tests:
- I think the IOException shouldn't be caught in the tests, because it is not 
expected and it will hide actual errors.
- The test case should fail if the policy is not found instead of silently 
succeeding.
- I would do an assertion after the disablePolicy to make sure that we are 
really running the test on a disabled policy. For example, if the default 
policy couldn't be disabled (which is not the case currently), the enable 
policy test would succeed, but wouldn't really test anything (see the sketch 
below).
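
A sketch of the three points above folded into one test, with a toy map
standing in for the real EC admin API; nothing here is the actual patch:

{code:java}
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

// Illustrative JUnit 4 sketch of the review points.
public class TestEnablePolicySketch {
  private final Map<String, Boolean> policies =
      new HashMap<>(Map.of("RS-6-3-1024k", true));

  @Test
  public void testEnableDisabledPolicy() throws Exception { // let IOException propagate
    policies.put("RS-6-3-1024k", false);
    // assert the precondition: we really start from a disabled policy
    assertFalse(policies.get("RS-6-3-1024k"));

    policies.put("RS-6-3-1024k", true); // the operation under test

    Boolean enabled = policies.get("RS-6-3-1024k");
    assertTrue("policy not found", enabled != null); // fail loudly if missing
    assertTrue(enabled);
  }
}
{code}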




> WEBHDFS: Support Enable/Disable EC Policy
> -
>
> Key: HDFS-14064
> URL: https://issues.apache.org/jira/browse/HDFS-14064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-401:
--
Fix Version/s: 0.3.0

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-461) Container remains in CLOSING state in SCM forever

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-461:
--
Fix Version/s: (was: 0.3.0)

> Container remains in CLOSING state in SCM forever
> -
>
> Key: HDDS-461
> URL: https://issues.apache.org/jira/browse/HDDS-461
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-461.00.patch, HDDS-461.01.patch, HDDS-461.02.patch, 
> HDDS-461.02.patch, HDDS-461.03.patch, all-node-ozone-logs-1536920345.tar.gz
>
>
> Container id # 13's state is not changing from CLOSING to CLOSED.
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 bin]# ./ozone scmcli info 13
> raft.rpc.type = GRPC (default)
> raft.grpc.message.size.max = 33554432 (custom)
> raft.client.rpc.retryInterval = 300 ms (default)
> raft.client.async.outstanding-requests.max = 100 (default)
> raft.client.async.scheduler-threads = 3 (default)
> raft.grpc.flow.control.window = 1MB (=1048576) (default)
> raft.grpc.message.size.max = 33554432 (custom)
> raft.client.rpc.request.timeout = 3000 ms (default)
> Container id: 13
> Container State: OPEN
> Container Path: 
> /tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/13/metadata
> Container Metadata:
> LeaderID: ctr-e138-1518143905142-459606-01-03.hwx.site
> Datanodes: 
> [ctr-e138-1518143905142-459606-01-07.hwx.site,ctr-e138-1518143905142-459606-01-08.hwx.site,ctr-e138-1518143905142-459606-01-03.hwx.site]{noformat}
>  
> snippet of scmcli list :
> {noformat}
> {
>  "state" : "CLOSING",
>  "replicationFactor" : "THREE",
>  "replicationType" : "RATIS",
>  "allocatedBytes" : 4831838208,
>  "usedBytes" : 4831838208,
>  "numberOfKeys" : 0,
>  "lastUsed" : 4391827471,
>  "stateEnterTime" : 5435591457,
>  "owner" : "f8332db1-b8b1-4077-a9ea-097033d074b7",
>  "containerID" : 13,
>  "deleteTransactionId" : 0,
>  "containerOpen" : true
> }{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-463) Fix the release packaging of the ozone distribution

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-463:
--
Fix Version/s: (was: 0.3.0)

> Fix the release packaging of the ozone distribution
> ---
>
> Key: HDDS-463
> URL: https://issues.apache.org/jira/browse/HDDS-463
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-463-ozone-0.2.001.patch, 
> HDDS-463-ozone-0.2.002.patch
>
>
> I found a few small problem during my test to release ozone:
> 1. The source assembly file still contains the ancient hdsl string in the name
> 2. The README of the binary distribution is confusing (this is Hadoop)
> 3. the binary distribution contains unnecessary test and source jar files
> 4. (Thanks to [~bharatviswa]): The log message after the dist creation is bad 
> (doesn't contain the restored version tag in the name)
> I combined these problems as all of the problems could be solved with very 
> small modifications...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-415) 'ozone om' with incorrect argument first logs all the STARTUP_MSG

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-415:
--
Fix Version/s: (was: 0.3.0)

> 'ozone om' with incorrect argument first logs all the STARTUP_MSG
> -
>
> Key: HDDS-415
> URL: https://issues.apache.org/jira/browse/HDDS-415
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-415.001.patch
>
>
> bin/ozone om with an incorrect argument first logs all the STARTUP_MSG
> {code:java}
> ➜ ozone-0.2.1-SNAPSHOT bin/ozone om -hgfj
> 2018-09-07 12:56:12,391 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = HW11469.local/10.22.16.67
> STARTUP_MSG: args = [-hgfj]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Updated] (HDDS-466) Handle null in argv of StorageContainerManager#createSCM

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-466:
--
Fix Version/s: (was: 0.3.0)

> Handle null in argv of StorageContainerManager#createSCM
> 
>
> Key: HDDS-466
> URL: https://issues.apache.org/jira/browse/HDDS-466
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-466.000.patch
>
>
> {{StorageContainerManager#createSCM}} takes {{String[]}} as an argument, and 
> the same is used for constructing the startup message; we have to check 
> whether the value passed is null before constructing the startup message.
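
A minimal sketch of the guard (illustrative only, not the patch):

{code:java}
import java.util.Arrays;

// Illustrative sketch: guard the startup-message construction against null argv.
public class CreateScmArgsDemo {

  static String startupMessage(String[] argv) {
    String args = (argv == null) ? "[]" : Arrays.toString(argv);
    return "STARTUP_MSG: args = " + args;
  }

  public static void main(String[] args) {
    System.out.println(startupMessage(null));
    System.out.println(startupMessage(new String[] {"-createObjectStore"}));
  }
}
{code}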



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-454) TestChunkStreams#testErrorReadGroupInputStream & TestChunkStreams#testReadGroupInputStream are failing

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-454:
--
Fix Version/s: (was: 0.3.0)

> TestChunkStreams#testErrorReadGroupInputStream & 
> TestChunkStreams#testReadGroupInputStream are failing
> --
>
> Key: HDDS-454
> URL: https://issues.apache.org/jira/browse/HDDS-454
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-454.001.patch
>
>
> TestChunkStreams.testErrorReadGroupInputStream & 
> TestChunkStreams.testReadGroupInputStream test-cases are failing with the 
> below error.
> {code}
> [ERROR] 
> testErrorReadGroupInputStream(org.apache.hadoop.ozone.om.TestChunkStreams)  
> Time elapsed: 0.058 s  <<< ERROR!
> java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.ozone.om.TestChunkStreams$2.getPos(TestChunkStreams.java:188)
>   at 
> org.apache.hadoop.ozone.client.io.ChunkGroupInputStream$ChunkInputStreamEntry.getPos(ChunkGroupInputStream.java:245)
>   at 
> org.apache.hadoop.ozone.client.io.ChunkGroupInputStream$ChunkInputStreamEntry.getRemaining(ChunkGroupInputStream.java:217)
>   at 
> org.apache.hadoop.ozone.client.io.ChunkGroupInputStream.read(ChunkGroupInputStream.java:118)
>   at 
> org.apache.hadoop.ozone.om.TestChunkStreams.testErrorReadGroupInputStream(TestChunkStreams.java:214)
> [ERROR] testReadGroupInputStream(org.apache.hadoop.ozone.om.TestChunkStreams) 
>  Time elapsed: 0.001 s  <<< ERROR!
> java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.ozone.om.TestChunkStreams$1.getPos(TestChunkStreams.java:134)
>   at 
> org.apache.hadoop.ozone.client.io.ChunkGroupInputStream$ChunkInputStreamEntry.getPos(ChunkGroupInputStream.java:245)
>   at 
> org.apache.hadoop.ozone.client.io.ChunkGroupInputStream$ChunkInputStreamEntry.getRemaining(ChunkGroupInputStream.java:217)
>   at 
> org.apache.hadoop.ozone.client.io.ChunkGroupInputStream.read(ChunkGroupInputStream.java:118)
>   at 
> org.apache.hadoop.ozone.om.TestChunkStreams.testReadGroupInputStream(TestChunkStreams.java:159)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-509) TestStorageContainerManager is flaky

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-509:
--
Fix Version/s: (was: 0.3.0)

> TestStorageContainerManager is flaky
> 
>
> Key: HDDS-509
> URL: https://issues.apache.org/jira/browse/HDDS-509
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-509-ozone-0.2.001.patch, 
> HDDS-509-ozone-0.2.002.patch
>
>
> {{TestStorageContainerManager}} is failing with the below exception
> {code}
> [ERROR] 
> testRpcPermission(org.apache.hadoop.ozone.TestStorageContainerManager)  Time 
> elapsed: 10.415 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[Access denied for user unknownUser. 
> Superuser privilege is required.]> but was:<[ChillModePrecheck failed for 
> allocateContainer]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.ozone.TestStorageContainerManager.verifyPermissionDeniedException(TestStorageContainerManager.java:195)
>   at 
> org.apache.hadoop.ozone.TestStorageContainerManager.testRpcPermissionWithConf(TestStorageContainerManager.java:156)
>   at 
> org.apache.hadoop.ozone.TestStorageContainerManager.testRpcPermission(TestStorageContainerManager.java:114)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> [ERROR] 
> testSCMChillModeRestrictedOp(org.apache.hadoop.ozone.TestStorageContainerManager)
>   Time elapsed: 5.564 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.ozone.TestStorageContainerManager.testSCMChillModeRestrictedOp(TestStorageContainerManager.java:595)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-502) Exception in OM startup when running unit tests

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-502:
--
Fix Version/s: (was: 0.3.0)

> Exception in OM startup when running unit tests
> ---
>
> Key: HDDS-502
> URL: https://issues.apache.org/jira/browse/HDDS-502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-502.01.patch
>
>
> The following exception is seen while starting OM via MiniOzoneCluster:
> {code}
> 2018-09-18 14:16:31,694 WARN  om.OzoneManager (LogAdapter.java:warn(59)) - 
> failed to register any UNIX signal loggers: 
> java.lang.IllegalStateException: Can't re-install the signal handlers.
>   at org.apache.hadoop.util.SignalLogger.register(SignalLogger.java:77)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:718)
>   at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:707)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:311)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:423)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:352)
>   at org.apache.hadoop.ozone.web.client.TestKeys.init(TestKeys.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> The exception is non-fatal so the tests eventually pass.
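> A minimal sketch of one way to avoid the noise (not necessarily the attached 
> patch): register the startup banner, and with it the UNIX signal loggers, at 
> most once per JVM, so a second OzoneManager in the same test process never 
> hits SignalLogger's "Can't re-install the signal handlers" path. The helper 
> class and method names are illustrative.
> {code:java}
> import java.util.concurrent.atomic.AtomicBoolean;
> import org.apache.commons.logging.Log;
> import org.apache.hadoop.util.StringUtils;
> 
> final class StartupMessageOnce {
>   // Flips to true on the first call; later callers skip registration.
>   private static final AtomicBoolean REGISTERED = new AtomicBoolean();
> 
>   static void logStartupShutdownMessage(Class<?> clazz, String[] args,
>       Log log) {
>     if (REGISTERED.compareAndSet(false, true)) {
>       StringUtils.startupShutdownMessage(clazz, args, log);
>     }
>   }
> }
> {code}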



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-468) Add version number to datanode plugin and ozone file system jar

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-468:
--
Fix Version/s: (was: 0.3.0)

> Add version number to datanode plugin and ozone file system jar
> ---
>
> Key: HDDS-468
> URL: https://issues.apache.org/jira/browse/HDDS-468
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-468.00.patch, HDDS-468.01.patch, HDDS-468.02.patch
>
>
> The 2 jars below are copied to the distribution without any ozone version:
> hadoop-ozone-datanode-plugin.jar
> hadoop-ozone-filesystem.jar
>  
> The Ozone version number should be appended at the end, as the other ozone jars have.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-417) Ambiguous error message when using genconf tool

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-417:
--
Fix Version/s: (was: 0.3.0)

> Ambiguous error message when using genconf tool
> ---
>
> Key: HDDS-417
> URL: https://issues.apache.org/jira/browse/HDDS-417
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-417.001.patch
>
>
> When using the genconf tool and specifying the output path as a file name, an 
> ambiguous error message is thrown.
>  
> {{aengineer@alpha ~/t/o/bin> ./ozone genconf -output 
> /Users/aengineer/ozone-site.xml}}
>  {{Invalid path or insufficient permission}}
>  
> Thanks [~anu] for spotting this.
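> One hedged way to make the message unambiguous is to check the two failure 
> modes separately before writing. This is a sketch only; the genconf internals 
> and the expectation that -output names a directory are assumptions taken from 
> the example above.
> {code:java}
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
> 
> final class GenconfPathCheck {
>   // Returns a specific error message, or null when the path looks usable.
>   static String describeProblem(String output) {
>     Path path = Paths.get(output);
>     if (!Files.isDirectory(path)) {
>       return output + " is not a directory; -output expects a directory "
>           + "into which ozone-site.xml is written";
>     }
>     if (!Files.isWritable(path)) {
>       return "insufficient permission to write to " + output;
>     }
>     return null;
>   }
> }
> {code}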



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-429) StorageContainerManager lock optimization

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-429:
--
Fix Version/s: (was: 0.3.0)

> StorageContainerManager lock optimization
> -
>
> Key: HDDS-429
> URL: https://issues.apache.org/jira/browse/HDDS-429
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-429.000.patch
>
>
> Currently, {{StorageContainerManager}} uses {{ReentrantLock}} for 
> synchronization. We can replace this with {{ReentrantReadWriteLock}} to get 
> better performance.
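> A minimal sketch of the proposed change: read-mostly operations take the 
> shared read lock, while mutations take the exclusive write lock. The class 
> and method names here are illustrative, not SCM's actual API.
> {code:java}
> import java.util.concurrent.locks.ReentrantReadWriteLock;
> 
> class ContainerStateStore {
>   private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
>   private long containerCount;
> 
>   long getContainerCount() {
>     lock.readLock().lock();      // many readers may hold this concurrently
>     try {
>       return containerCount;
>     } finally {
>       lock.readLock().unlock();
>     }
>   }
> 
>   void addContainer() {
>     lock.writeLock().lock();     // writers remain exclusive
>     try {
>       containerCount++;
>     } finally {
>       lock.writeLock().unlock();
>     }
>   }
> }
> {code}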



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-438:
--
Fix Version/s: (was: 0.3.0)

> 'ozone oz' should print usage when command or sub-command is missing
> 
>
> Key: HDDS-438
> URL: https://issues.apache.org/jira/browse/HDDS-438
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, usability
> Fix For: 0.2.1
>
> Attachments: HDDS-438.001.patch, HDDS-438.002.patch, 
> HDDS-438.003.patch
>
>
> When invoked without the command or sub-command, _ozone oz_ prints the 
> following error:
> {code:java}
> $ ozone oz
> Please select a subcommand
> {code}
> and
> {code:java}
> $ ozone oz volume
> Please select a subcommand
> {code}
> For most users familiar with Unix utilities it is obvious that they should rerun 
> the command with _--help._ However, we can just print the usage instead to avoid 
> the guesswork; see the sketch below.
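> A hedged sketch of that suggestion in picocli terms (the shell's actual entry 
> point and error handling are assumptions here):
> {code:java}
> import java.io.PrintStream;
> import picocli.CommandLine;
> 
> final class UsageOnMissingSubcommand {
>   // Print picocli's generated usage text when no subcommand was given,
>   // instead of the bare "Please select a subcommand" message.
>   static void runOrPrintUsage(Object rootCommand, String[] args,
>       PrintStream err) {
>     if (args.length == 0) {
>       CommandLine.usage(rootCommand, err);
>       return;
>     }
>     // ... normal picocli parsing/dispatch would follow here
>   }
> }
> {code}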



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-456) TestOzoneShell#init is breaking due to Null Pointer Exception

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-456:
--
Fix Version/s: (was: 0.3.0)

> TestOzoneShell#init is breaking due to Null Pointer Exception
> -
>
> Key: HDDS-456
> URL: https://issues.apache.org/jira/browse/HDDS-456
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-456.001.patch
>
>
> Run TestOzoneShell in an IDE and all tests will fail with the following stack trace:
>  
> {noformat}
> java.lang.NullPointerException
>  at java.util.Objects.requireNonNull(Objects.java:203)
>  at java.util.Arrays$ArrayList.<init>(Arrays.java:3813)
>  at java.util.Arrays.asList(Arrays.java:3800)
>  at 
> org.apache.hadoop.util.StringUtils.createStartupShutdownMessage(StringUtils.java:746)
>  at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:714)
>  at 
> org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:707)
>  at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:308)
>  at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:419)
>  at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:348)
>  at 
> org.apache.hadoop.ozone.ozShell.TestOzoneShell.init(TestOzoneShell.java:146)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>  at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>  at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70){noformat}
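> A minimal sketch of a defensive fix, assuming the root cause is a null 
> argument array reaching {{Arrays.asList}} inside 
> {{createStartupShutdownMessage}} (consistent with the trace above); not 
> necessarily the attached patch:
> {code:java}
> import org.apache.commons.logging.Log;
> import org.apache.hadoop.util.StringUtils;
> 
> final class SafeStartupMessage {
>   // Normalize a null argument array to an empty one before it reaches
>   // Arrays.asList, which throws the NullPointerException seen above.
>   static void log(Class<?> clazz, String[] args, Log log) {
>     StringUtils.startupShutdownMessage(
>         clazz, args == null ? new String[0] : args, log);
>   }
> }
> {code}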



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-507) EventQueue should be shutdown on SCM shutdown

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-507:
--
Fix Version/s: (was: 0.3.0)

> EventQueue should be shutdown on SCM shutdown
> -
>
> Key: HDDS-507
> URL: https://issues.apache.org/jira/browse/HDDS-507
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-507-ozone-0.2.001.patch
>
>
> This can be reproduced by running TestNodeFailure multiple times. Jenkins 
> sometimes also hits this. 
>  
> {code}
> Current thread (0x7fbe6f018800):  JavaThread 
> "EventQueue-PipelineCloseForPipelineCloseHandler" daemon [_thread_in_native, 
> id=58639, stack(0x700018009000,0x700018109000)]
>  
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0x0004001d
>  
>  
>  
> Stack: [0x700018009000,0x700018109000],  sp=0x700018108128,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> C  [librocksdbjni6372054043595793813.jnilib+0x163ac8]  
> rocksdb::GetColumnFamilyID(rocksdb::ColumnFamilyHandle*)+0x8
> C  [librocksdbjni6372054043595793813.jnilib+0x228368]  
> rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, 
> rocksdb::Slice const&, rocksdb::Slice const&)+0x58
> C  [librocksdbjni6372054043595793813.jnilib+0x2282fe]  
> rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice 
> const&)+0xe
> C  [librocksdbjni6372054043595793813.jnilib+0x171c84]  
> rocksdb::CompactedDBImpl::Open(rocksdb::Options const&, 
> std::__1::basic_string<char, std::__1::char_traits<char>, 
> std::__1::allocator<char> > const&, rocksdb::DB**)+0x2a4
> C  [librocksdbjni6372054043595793813.jnilib+0x971f7]  
> rocksdb_put_helper(JNIEnv_*, rocksdb::DB*, rocksdb::WriteOptions const&, 
> rocksdb::ColumnFamilyHandle*, _jbyteArray*, int, int, _jbyteArray*, int, 
> int)+0x137
> j  org.rocksdb.RocksDB.put(JJ[BII[BII)V+0
> j  org.rocksdb.RocksDB.put(Lorg/rocksdb/WriteOptions;[B[B)V+17
> j  org.apache.hadoop.utils.RocksDBStore.put([B[B)V+10
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.updatePipelineState(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;Lorg/apache/hadoop/hdds/protocol/proto/HddsProtos$LifeCycleEvent;)V+222
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.finalizePipeline(Lorg/apache/hadoop/hdds/scm/container/common/helpers/Pipeline;)V+75
> j  
> org.apache.hadoop.hdds.scm.container.ContainerMapping.handlePipelineClose(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;)V+18
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Lorg/apache/hadoop/hdds/scm/container/common/helpers/PipelineID;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+5
> j  
> org.apache.hadoop.hdds.scm.pipelines.PipelineCloseHandler.onMessage(Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V+6
> J 5844 C1 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(Lorg/apache/hadoop/hdds/server/events/EventHandler;Ljava/lang/Object;Lorg/apache/hadoop/hdds/server/events/EventPublisher;)V
>  (41 bytes) @ 0x000115c80bc4 [0x000115c80aa0+0x124]
> J 5670 C1 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor$$Lambda$143.run()V 
> (20 bytes) @ 0x0001168f625c [0x0001168f61c0+0x9c]
> j  
> java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
> J 3226 C1 java.util.concurrent.ThreadPoolExecutor$Worker.run()V (9 bytes) @ 
> 0x000116356e44 [0x000116356d40+0x104]
> J 3107 C1 java.lang.Thread.run()V (17 bytes) @ 0x000115d7b0c4 
> [0x000115d7af80+0x144]
> v  ~StubRoutines::call_stub
> V  [libjvm.dylib+0x2ef1f6]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x6ae
> V  [libjvm.dylib+0x2ef99a]  JavaCalls::call_virtual(JavaValue*, KlassHandle, 
> Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x164
> V  [libjvm.dylib+0x2efb46]  JavaCalls::call_virtual(JavaValue*, Handle, 
> KlassHandle, Symbol*, Symbol*, Thread*)+0x4a
> V  [libjvm.dylib+0x34a46d]  thread_entry(JavaThread*, Thread*)+0x7c
> V  [libjvm.dylib+0x56eb0f]  JavaThread::thread_main_inner()+0x9b
> V  [libjvm.dylib+0x57020a]  JavaThread::run()+0x1c2
> V  [libjvm.dylib+0x48d4a6]  java_start(Thread*)+0xf6
> C  [libsystem_pthread.dylib+0x3661]  _pthread_body+0x154
> C  [libsystem_pthread.dylib+0x350d]  _pthread_body+0x0
> C  [libsystem_pthread.dylib+0x2bf9]  thread_start+0xd
> C  0x
> {code}
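> A minimal sketch of the fix direction named in the title, assuming 
> {{EventQueue}} exposes a close/shutdown hook that drains its executors (the 
> field and method names below are illustrative): shut the queue down before 
> releasing the RocksDB-backed stores its handlers write to, so a late 
> {{EventQueue-PipelineCloseForPipelineCloseHandler}} thread cannot touch a 
> closed DB.
> {code:java}
> public void stop() throws IOException {
>   // 1. Close the queue first so no handler (e.g. PipelineCloseHandler)
>   //    can run once the stores below are released.
>   eventQueue.close();
>   // 2. Only then close the RocksDB-backed state that handlers update.
>   pipelineStore.close();
> }
> {code}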



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Updated] (HDDS-435) Enhance the existing ozone documentation

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-435:
--
Fix Version/s: (was: 0.3.0)

> Enhance the existing ozone documentation
> 
>
> Key: HDDS-435
> URL: https://issues.apache.org/jira/browse/HDDS-435
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-435-ozone-0.2.001.patch, 
> HDDS-435-ozone-0.2.004.patch, HDDS-435-ozone-0.2.005.patch, 
> HDDS-435.002.patch, HDDS-435.003.patch
>
>
> hadoop-ozone/docs contains some documentation but it covers only a limited set 
> of ozone features.
> I imported the documentation from HDFS-12664 (which was written by [~anu]) 
> and updated the files according to the latest changes.
> Also adjusted the structure of the documentation site (using sub menus), and 
> started to use syntax highlighting.
> I also modified the dist script to include the docs file in the root folder 
> of the distribution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-425) Move unit test of the genconf tool to hadoop-ozone/tools module

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-425:
--
Fix Version/s: (was: 0.3.0)

> Move unit test of the genconf tool to hadoop-ozone/tools module
> ---
>
> Key: HDDS-425
> URL: https://issues.apache.org/jira/browse/HDDS-425
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-425.001.patch
>
>
> Based on a review comment from [~elek] in HDDS-417, this Jira proposes to move 
> the unit test of the genconf tool to the hadoop-ozone/tools module. It doesn't 
> require a MiniOzoneCluster, so it shouldn't be in the integration test module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-503) Suppress ShellBasedUnixGroupsMapping exception in tests

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-503:
--
Fix Version/s: (was: 0.3.0)

> Suppress ShellBasedUnixGroupsMapping exception in tests
> ---
>
> Key: HDDS-503
> URL: https://issues.apache.org/jira/browse/HDDS-503
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-503.01.patch
>
>
> The following exception can be suppressed in unit tests. It is benign and 
> caused by tests using the username {{bilbo}} which is unlikely to be a valid 
> unix user on the test machine. Unless the machine is owned by a hobbit.
> {code}
> 2018-09-18 14:17:15,853 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(210)) - unable to return 
> groups for user bilbo
> PartialGroupNameException The user name 'bilbo' is not found. id: bilbo: no 
> such user
> id: bilbo: no such user
> {code}
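> One hedged way to silence the benign warning in tests is to raise the log 
> level for {{ShellBasedUnixGroupsMapping}} before the cluster starts (log4j 
> 1.x API, which the Hadoop test tree uses); this is a sketch of the idea, not 
> necessarily the attached patch:
> {code:java}
> import org.apache.log4j.Level;
> import org.apache.log4j.Logger;
> import org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
> 
> final class QuietGroupMapping {
>   // Drop the per-lookup WARN noise for nonexistent test users like bilbo.
>   static void suppressGroupMappingWarnings() {
>     Logger.getLogger(ShellBasedUnixGroupsMapping.class)
>         .setLevel(Level.ERROR);
>   }
> }
> {code}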



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-469) Rename 'ozone oz' to 'ozone sh'

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-469:
--
Fix Version/s: (was: 0.3.0)

> Rename 'ozone oz' to 'ozone sh'
> ---
>
> Key: HDDS-469
> URL: https://issues.apache.org/jira/browse/HDDS-469
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>  Labels: Incompatible
> Fix For: 0.2.1
>
> Attachments: HDDS-469.01.patch, HDDS-469.02.patch
>
>
> Ozone shell volume/bucket/key sub-commands are invoked using _ozone oz._ 
> _ozone oz_ sounds repetitive. Instead we can replace it with _ozone sh_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-449) Add a NULL check to protect DeadNodeHandler#onMessage

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-449:
--
Fix Version/s: (was: 0.3.0)

> Add a NULL check to protect DeadNodeHandler#onMessage
> -
>
> Key: HDDS-449
> URL: https://issues.apache.org/jira/browse/HDDS-449
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: EasyFix
> Fix For: 0.2.1
>
> Attachments: HDDS-449.000.patch
>
>
> Add a NULL check to protect against the situation below (this may only happen 
> in the case of a unit test); see the sketch after the stack trace:
>  1. A new datanode registers to SCM.
>  2. Temporarily, no container is allocated on the new datanode.
>  3. The new datanode dies and an event is fired to {{DeadNodeHandler}}.
>  4. In {{DeadNodeHandler#onMessage}}, the lookup in {{node2ContainerMap}} finds 
> nothing, so {{containers}} will be {{NULL}}.
>  5. A NullPointerException will be thrown in the subsequent iteration over 
> {{containers}}, like:
> {noformat}
> [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.535 
> s <<< FAILURE! - in org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler
> [ERROR] 
> testStatisticsUpdate(org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler)  
> Time elapsed: 0.33 s  <<< ERROR!
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdds.scm.node.DeadNodeHandler.onMessage(DeadNodeHandler.java:68)
> at 
> org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler.testStatisticsUpdate(TestDeadNodeHandler.java:179)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}
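> A minimal sketch of the guard inside {{DeadNodeHandler#onMessage}} (the 
> lookup and logger names are taken from the description above; treat the 
> exact signatures as illustrative):
> {code:java}
> Set<ContainerID> containers =
>     node2ContainerMap.getContainers(datanodeDetails.getUuid());
> if (containers == null) {
>   LOG.info("Dead datanode {} has no containers, nothing to handle.",
>       datanodeDetails.getUuid());
>   return;  // step 5 above: avoid iterating a NULL collection
> }
> for (ContainerID id : containers) {
>   // ... existing replication/statistics handling
> }
> {code}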



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-481) Classes are missing from the shaded ozonefs jar

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-481:
--
Fix Version/s: (was: 0.3.0)

> Classes are missing from the shaded ozonefs jar
> ---
>
> Key: HDDS-481
> URL: https://issues.apache.org/jira/browse/HDDS-481
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-481.00.patch
>
>
> The ozonefs acceptance test contains only one simple command, which is executed 
> on an *ozone* node with ozone fs.
> Unfortunately, the hdfs dfs command doesn't work with ozone fs due to a missing 
> class file.
> To test:
> {code}
> cd hadoop-dist/target/ozone-0.2.1-SNAPSHOT/compose/ozonefs
> docker-compose exec scm ozone sh volume create --user hadoop /vol1
> docker-compose exec scm ozone sh bucket create /vol1/bucket
> docker-compose exec hadooplast hdfs dfs -ls o3://bucket.vol1/
> {code}
> The result is:
> {code}
> 2018-09-17 13:48:08 INFO  Configuration:3204 - Removed undeclared tags:
> 2018-09-17 13:48:08 INFO  Configuration:3204 - Removed undeclared tags:
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/ratis/shaded/proto/RaftProtos$ReplicationLevel
>   at 
> org.apache.hadoop.hdds.scm.ScmConfigKeys.<clinit>(ScmConfigKeys.java:64)
>   at 
> org.apache.hadoop.ozone.OzoneConfigKeys.<clinit>(OzoneConfigKeys.java:221)
>   at 
> org.apache.hadoop.ozone.client.OzoneBucket.<init>(OzoneBucket.java:116)
>   at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getBucketDetails(RpcClient.java:420)
>   at 
> org.apache.hadoop.ozone.client.OzoneVolume.getBucket(OzoneVolume.java:199)
>   at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:122)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:249)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:232)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.ratis.shaded.proto.RaftProtos$ReplicationLevel
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 21 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-440) Datanode loops forever if it cannot create directories

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-440:
--
Fix Version/s: (was: 0.3.0)

> Datanode loops forever if it cannot create directories
> --
>
> Key: HDDS-440
> URL: https://issues.apache.org/jira/browse/HDDS-440
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-440.00.patch
>
>
> Datanode starts but runs in a tight loop forever if it cannot create the 
> DataNode ID directory e.g. due to permissions issues. I encountered this by 
> having a typo in my ozone-site.xml for {{ozone.scm.datanode.id}}.
> In just a few minutes the DataNode had generated over 20GB of log+out files 
> with the following exception:
> {code:java}
> 2018-09-12 17:28:20,649 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 2
> 63:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2018-09-12 17:28:20,648 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Execution exception when 
> running task in Datanode State Mach
> ine Thread - 160
> 2018-09-12 17:28:20,650 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 1
> 60:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
> We should just exit since this is a fatal issue.
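> A sketch of failing fast instead of looping, using {{ExitUtil}} (Hadoop's 
> test-friendly wrapper around System.exit); the placement inside 
> {{InitDatanodeState}} is illustrative:
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.util.ExitUtil;
> 
> try {
>   persistContainerDatanodeDetails();
> } catch (IOException e) {
>   // Fatal: without the datanode ID directory the state machine can never
>   // make progress, so terminate instead of retrying in a tight loop.
>   ExitUtil.terminate(1, "Unable to create datanode ID directories: " + e);
> }
> {code}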



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-495) Ozone docs and ozonefs packages have undefined hadoop component

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-495:
--
Fix Version/s: (was: 0.3.0)

> Ozone docs and ozonefs packages have undefined hadoop component
> ---
>
> Key: HDDS-495
> URL: https://issues.apache.org/jira/browse/HDDS-495
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-495.001.patch
>
>
> When building the ozone package, the docs and ozonefs packages create an 
> UNDEF hadoop component in the share folder:
>  * 
> ./hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT/share/hadoop/UNDEF/lib
>  * 
> ./hadoop-ozone/docs/target/hadoop-ozone-docs-0.3.0-SNAPSHOT/share/hadoop/UNDEF/lib



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-487) Doc files are missing ASF license headers

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-487:
--
Fix Version/s: (was: 0.3.0)

> Doc files are missing ASF license headers
> -
>
> Key: HDDS-487
> URL: https://issues.apache.org/jira/browse/HDDS-487
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Namit Maheshwari
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-487.001.patch, HDDS-487.002.patch
>
>
> The following doc files are missing ASF license headers:
> {code}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/docs/content/BuildingSources.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/KeyCommands.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/Hdds.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneManager.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/BucketCommands.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneFS.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/VolumeCommands.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/JavaApi.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/RunningWithHDFS.md
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-501) AllocateBlockResponse.keyLocation must be an optional field

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-501:
--
Fix Version/s: (was: 0.3.0)

> AllocateBlockResponse.keyLocation must be an optional field
> ---
>
> Key: HDDS-501
> URL: https://issues.apache.org/jira/browse/HDDS-501
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-501.01.patch
>
>
> keyLocation may not be initialized if allocateBlock fails in the following 
> function:
> {code:java}
> public AllocateBlockResponse allocateBlock(RpcController controller,
> AllocateBlockRequest request) throws ServiceException {
>   AllocateBlockResponse.Builder resp =
>   AllocateBlockResponse.newBuilder();
>   try {
> KeyArgs keyArgs = request.getKeyArgs();
> OmKeyArgs omKeyArgs = new OmKeyArgs.Builder()
> .setVolumeName(keyArgs.getVolumeName())
> .setBucketName(keyArgs.getBucketName())
> .setKeyName(keyArgs.getKeyName())
> .build();
> OmKeyLocationInfo newLocation = impl.allocateBlock(omKeyArgs,
> request.getClientID());
> resp.setKeyLocation(newLocation.getProtobuf());
> resp.setStatus(Status.OK);
>   } catch (IOException e) {
> resp.setStatus(exceptionToResponseStatus(e));
>   }
>   return resp.build();
> }{code}
> Hence it must be an optional field; otherwise the protobuf builder exception 
> suppresses the real issue. A sketch of the change is below.
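> The corresponding change in the protobuf definition would be a one-liner 
> along these lines (the message layout and tag numbers are illustrative, not 
> copied from the real .proto):
> {code}
> message AllocateBlockResponse {
>   required Status status = 1;
>   optional KeyLocation keyLocation = 2;  // was: required
> }
> {code}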



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-497) Suppress license warnings for error log files

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-497:
--
Fix Version/s: (was: 0.3.0)

> Suppress license warnings for error log files
> -
>
> Key: HDDS-497
> URL: https://issues.apache.org/jira/browse/HDDS-497
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-497.01.patch, HDDS-497.02.patch
>
>
> Let's suppress ASF license warnings for JVM error log files, e.g.:
> {code}
> Lines that start with ? in the ASF License  report indicate files that do 
> not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/integration-test/hs_err_pid4508.log
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-424) Consolidate ozone oz parameters to use GNU convention

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-424:
--
Fix Version/s: (was: 0.3.0)

> Consolidate ozone oz parameters to use GNU convention
> -
>
> Key: HDDS-424
> URL: https://issues.apache.org/jira/browse/HDDS-424
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-424-ozone-0.2.001.patch
>
>
> In common Linux commands the convention is:
> 1. avoid using camelCase arguments/flags
> 2. use a double dash with words (--user) and a single dash with letters (-u)
> I propose to modify ozone oz by:
> * Adding a second dash for all the word flags
> * Using 'key get', 'key info' instead of -infoKey
> * Defining the input/output file name as a second argument instead of 
> --file/-file, as it's always required (see the sketch below)
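> A hedged sketch of the convention in picocli terms: one-letter flags get a 
> single dash, word flags a double dash, and the file becomes a positional 
> parameter instead of a --file option. The command and field names here are 
> illustrative, not the actual ozone shell classes.
> {code:java}
> import picocli.CommandLine.Command;
> import picocli.CommandLine.Option;
> import picocli.CommandLine.Parameters;
> 
> @Command(name = "get", description = "Gets a key and writes it to a file")
> class KeyGetCommand implements Runnable {
>   @Option(names = {"-u", "--user"}, description = "Run as this user")
>   private String user;
> 
>   @Parameters(index = "0", description = "URI of the key")
>   private String keyUri;
> 
>   @Parameters(index = "1", description = "Output file name")
>   private String fileName;
> 
>   @Override
>   public void run() {
>     // ... fetch keyUri and write its content to fileName
>   }
> }
> {code}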



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-470) Ozone acceptance tests are failing

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-470:
--
Fix Version/s: (was: 0.3.0)

> Ozone acceptance tests are failing
> --
>
> Key: HDDS-470
> URL: https://issues.apache.org/jira/browse/HDDS-470
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-470-ozone-0.2.001.patch
>
>
> The following acceptance tests are failing in trunk:
> {code:java}
> ==
> Acceptance.Ozonefs.Ozonesinglenode :: Ozonefs Single Node Test
> ==
> Create volume and bucket | PASS |
> --
> Check volume from ozonefs | FAIL |
> 1 != 0
> --
> Create directory from ozonefs | FAIL |
> 1 != 0
> --
> Test key handling | FAIL |
> 2 != 0
> --
> Acceptance.Ozonefs.Ozonesinglenode :: Ozonefs Single Node Test | FAIL |
> 4 critical tests, 1 passed, 3 failed
> 4 tests total, 1 passed, 3 failed
> ==
> Acceptance.Ozonefs | FAIL |
> 7 critical tests, 4 passed, 3 failed
> 7 tests total, 4 passed, 3 failed
> ==
> Acceptance | FAIL |
> 16 critical tests, 13 passed, 3 failed
> 16 tests total, 13 passed, 3 failed
> =={code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-496) Ozone tools module is incorrectly classified as 'hdds' component

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-496:
--
Fix Version/s: (was: 0.3.0)

> Ozone tools module is incorrectly classified as 'hdds' component
> 
>
> Key: HDDS-496
> URL: https://issues.apache.org/jira/browse/HDDS-496
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-496.001.patch
>
>
> ~/hadoop/hadoop-ozone/tools is incorrectly classified as the 'hdds' component, 
> and thus we see the following:
> ~/hadoop/hadoop-ozone/tools/target/hadoop-ozone-tools-0.3.0-SNAPSHOT/share/hadoop/{color:#d04437}hdds{color}/lib
> To correct this, it must be classified as 'ozone'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


