[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623096#comment-16623096
 ] 

Lokesh Jain commented on HDFS-13876:


[~smeng] Thanks for updating the patch! I have a few minor comments.
 # TestHttpFSServer#testDisallowSnapshot:1164 - Comment should be "FileStatus 
should (not) have snapshot enabled bit set"
 # BaseTestHttpFSWith#testDisallowSnapshotException:1431 - Error condition 
should be "disallowSnapshot should not have succeeded".
 # Can you please fix the checkstyle issues?

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch, HDFS-13876.002.patch, 
> HDFS-13876.003.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.






[jira] [Updated] (HDDS-526) Clean previous chill mode code from NodeManager.

2018-09-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-526:

Summary: Clean previous chill mode code from NodeManager.   (was: Clean 
previous chill mode code from SCMNodeManager. )

> Clean previous chill mode code from NodeManager. 
> -
>
> Key: HDDS-526
> URL: https://issues.apache.org/jira/browse/HDDS-526
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-526.00.patch
>
>
> Clean previous chill mode code from NodeManager, BlockManagerImpl and add jmx 
> attribute for chill mode status. 






[jira] [Updated] (HDDS-526) Clean previous chill mode code from SCMNodeManager.

2018-09-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-526:

Description: Clean previous chill mode code from NodeManager, 
BlockManagerImpl and add jmx attribute for chill mode status.   (was: Clean 
previous chill mode code from NodeManager and add jmx attribute for chill mode 
status. )

> Clean previous chill mode code from SCMNodeManager. 
> 
>
> Key: HDDS-526
> URL: https://issues.apache.org/jira/browse/HDDS-526
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-526.00.patch
>
>
> Clean previous chill mode code from NodeManager, BlockManagerImpl and add jmx 
> attribute for chill mode status. 






[jira] [Commented] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-20 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623078#comment-16623078
 ] 

Lokesh Jain commented on HDFS-13893:


[~arpitagarwal] Thanks for reviewing the patch! I have used CommandLine.getArgs 
in the patch. For a command used below.

 
{code:java}
hdfs diskbalancer random1 -report random2 random3
{code}
getArgs() would return the array below:
{code:java}
[hdfs, diskbalancer, random1, random2, random3]{code}
Therefore the patch throws an exception if args.length > 2.
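For context, here is a minimal sketch of that validation, assuming Apache 
Commons CLI (the class and message wording are illustrative, not the exact 
code in the patch):
{code:java}
import org.apache.commons.cli.CommandLine;

public final class DiskBalancerArgCheck {
  // Reject leftover positional tokens beyond the expected two, so a stray
  // argument such as "random3" above fails fast instead of being ignored.
  static void verifyCommandOptions(CommandLine cmd) {
    String[] args = cmd.getArgs();  // positional (non-option) arguments
    if (args.length > 2) {
      throw new IllegalArgumentException(
          "Invalid or extra arguments: " + String.join(" ", args));
    }
  }
}
{code}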

 

> DiskBalancer: no validations for Disk balancer commands 
> 
>
> Key: HDFS-13893
> URL: https://issues.apache.org/jira/browse/HDFS-13893
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13893.001.patch
>
>
> {{Scenario:-}}
>  
>  1. Run the Disk Balancer commands, passing extra arguments:
> {noformat} 
> hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
> *sgfsdgfs*
> 2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
> hostname:50077
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
> fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
> hostname:50077 took 23 ms
> 2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
> 2018-08-31 14:57:35,457 INFO command.Command: 
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> Writing plan to:
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> {noformat} 
> Expected Output:- 
> =
> Disk balancer commands should fail if we pass any invalid or extra 
> arguments.






[jira] [Updated] (HDDS-526) Clean previous chill mode code from SCMNodeManager.

2018-09-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-526:

Assignee: Ajay Kumar
  Status: Patch Available  (was: Open)

> Clean previous chill mode code from SCMNodeManager. 
> 
>
> Key: HDDS-526
> URL: https://issues.apache.org/jira/browse/HDDS-526
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-526.00.patch
>
>
> Clean previous chill mode code from NodeManager and add jmx attribute for 
> chill mode status. 






[jira] [Commented] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623076#comment-16623076
 ] 

Xiaoyu Yao commented on HDDS-370:
-

[~ajayydv], thanks for the update. The patch v4 looks good to me. We will need 
to remove the empty test class TestRpcClient.java and fix some checkstyle 
issues. It would be great if you can incorporate [~anu]'s suggestion in your 
next patch.

> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-370.00.patch, HDDS-370.01.patch, HDDS-370.02.patch, 
> HDDS-370.03.patch, HDDS-370.04.patch, HDDS-370.05.patch
>
>
> Add and implement following functions in SCMClientProtocolServer
> # isScmInChillMode
> # forceScmExitChillMode






[jira] [Updated] (HDDS-526) Clean previous chill mode code from SCMNodeManager.

2018-09-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-526:

Attachment: HDDS-526.00.patch

> Clean previous chill mode code from SCMNodeManager. 
> 
>
> Key: HDDS-526
> URL: https://issues.apache.org/jira/browse/HDDS-526
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
> Attachments: HDDS-526.00.patch
>
>
> Clean previous chill mode code from NodeManager and add jmx attribute for 
> chill mode status. 






[jira] [Updated] (HDDS-526) Clean previous chill mode code from SCMNodeManager.

2018-09-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-526:

Attachment: (was: HDDS-526.00.PATCH)

> Clean previous chill mode code from SCMNodeManager. 
> 
>
> Key: HDDS-526
> URL: https://issues.apache.org/jira/browse/HDDS-526
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
>
> Clean previous chill mode code from NodeManager and add jmx attribute for 
> chill mode status. 






[jira] [Updated] (HDDS-526) Clean previous chill mode code from SCMNodeManager.

2018-09-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-526:

Attachment: HDDS-526.00.PATCH

> Clean previous chill mode code from SCMNodeManager. 
> 
>
> Key: HDDS-526
> URL: https://issues.apache.org/jira/browse/HDDS-526
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
>
> Clean previous chill mode code from NodeManager and add jmx attribute for 
> chill mode status. 






[jira] [Updated] (HDDS-526) Clean previous chill mode code from SCMNodeManager.

2018-09-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-526:

Description: Clean previous chill mode code from NodeManager and add jmx 
attribute for chill mode status.   (was: Clean previous chill mode code from 
SCMNodeManager. )

> Clean previous chill mode code from SCMNodeManager. 
> 
>
> Key: HDDS-526
> URL: https://issues.apache.org/jira/browse/HDDS-526
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
>
> Clean previous chill mode code from NodeManager and add jmx attribute for 
> chill mode status. 






[jira] [Commented] (HDDS-391) Simplify Audit Framework to make audit logging easier to use

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623047#comment-16623047
 ] 

Hadoop QA commented on HDDS-391:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940709/HDDS-391.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a07d65223fad 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 524f7cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1180/testReport/ |
| Max. process+thread count | 331 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common 

[jira] [Commented] (HDFS-13840) RBW Blocks which are having less GS should be added to Corrupt

2018-09-20 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623018#comment-16623018
 ] 

Surendra Singh Lilhore commented on HDFS-13840:
---

+1 for the latest patch.

> RBW Blocks which are having less GS should be added to Corrupt
> --
>
> Key: HDFS-13840
> URL: https://issues.apache.org/jira/browse/HDFS-13840
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, 
> HDFS-13840-004.patch, HDFS-13840-005.patch, HDFS-13840.patch
>
>
> # Start two DNs (DN1, DN2).
>  # Write fileA with rep=2 (don't close).
>  # Stop DN1.
>  # Write some data to fileA.
>  # Restart DN1.
>  # Get the block locations of fileA.
> Here the RWR-state block will be reported on DN restart and added to locations.
> IMO, RWR blocks which have a lower GS shouldn't be added, as they give a false 
> positive (in any case the read can fail since their genstamp is lower).






[jira] [Updated] (HDDS-391) Simplify Audit Framework to make audit logging easier to use

2018-09-20 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-391:
---
Attachment: HDDS-391.001.patch
Status: Patch Available  (was: In Progress)

Changes:
 # AuditMessage implemented with the Builder pattern (a usage sketch follows 
this list)
 # AuditMessage now includes a throwable as a member
 # New interface - Auditor: must be implemented by any class where we want to 
audit events
 # The Auditor interface has 2 methods to build an AuditMessage and must be 
implemented by the actor class.
 # Simplified AuditLogger to remove methods that will not be used
 # Success events will be logged as INFO and failure events as ERROR
 # Since audit is a controlled activity, removed the option to configure the 
log level programmatically, which could have been used to override the default 
log level described above. This came from feedback from Jitendra in our last 
discussion.
 # log4j2.properties and AuditMessage now use '|' as the delimiter
 # Updated tests to use the Builder
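A minimal sketch of what the Builder usage might look like, based on the list 
above (the method names here are assumptions for illustration, not the exact 
API in the patch):
{code:java}
import java.io.IOException;

// Hypothetical usage: success events go out at INFO, failures at ERROR with
// the throwable attached, and '|' delimits the fields in the rendered message.
AuditMessage msg = new AuditMessage.Builder()
    .setUser("testuser")
    .forOperation("CREATE_VOLUME")
    .withParams("volume=vol1")
    .withResult("FAILURE")
    .withException(new IOException("volume already exists"))
    .build();
{code}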

 

cc: [~anu], [~ajayydv] for review

> Simplify Audit Framework to make audit logging easier to use
> 
>
> Key: HDDS-391
> URL: https://issues.apache.org/jira/browse/HDDS-391
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-391.001.patch
>
>
> In HDDS-376 a custom AuditMessage structure was created for use in Audit 
> Logging.
> This Jira proposes to incorporate [suggested 
> improvements|https://issues.apache.org/jira/browse/HDDS-376?focusedCommentId=16594170=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16594170]
>  from [~ajayydv].
>  * AuditMessage should encapsulate log level, audit status, message and 
> exception.
>  * AuditMessage should use the | delimiter instead of a space. This will be 
> especially useful when AuditParser is completed as part of HDDS-393.






[jira] [Updated] (HDDS-391) Simplify Audit Framework to make audit logging easier to use

2018-09-20 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-391:
---
Summary: Simplify Audit Framework to make audit logging easier to use  
(was: Simplify AuditMessage structure to make audit logging easier to use)

> Simplify Audit Framework to make audit logging easier to use
> 
>
> Key: HDDS-391
> URL: https://issues.apache.org/jira/browse/HDDS-391
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> In HDDS-376 a custom AuditMessage structure was created for use in Audit 
> Logging.
> This Jira proposes to incorporate [suggested 
> improvements|https://issues.apache.org/jira/browse/HDDS-376?focusedCommentId=16594170=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16594170]
>  from [~ajayydv].
>  * AuditMessage should encapsulate log level, audit status, message and 
> exception.
>  * AuditMessage should use the | delimiter instead of a space. This will be 
> especially useful when AuditParser is completed as part of HDDS-393.






[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622954#comment-16622954
 ] 

Hadoop QA commented on HDFS-13882:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 159 
unchanged - 2 fixed = 159 total (was 161) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13882 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940691/HDFS-13882.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 78bfe5a71df5 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HDFS-13873) ObserverNode should reject read requests when it is too far behind.

2018-09-20 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622948#comment-16622948
 ] 

Chen Liang commented on HDFS-13873:
---

Thanks [~csun]!

I had some thoughts on this, sharing here for reference:

version 1: have a tracker such that, whenever a client sends a request to the 
Observer, the tracker records the Observer's current state id X and timestamp 
tx. Comparing these with the previous value Y and previous timestamp ty, 
t = (tx - ty) / (X - Y) gives an estimate of how long the Observer takes to 
process one txid (this can be measured as a moving average for better 
accuracy). Then, with delta = clientStateId - X, delta * t gives the estimated 
time until the client request can start being processed, i.e. the msync wait 
time.
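A minimal sketch of the version 1 estimator just described (the class, names, 
and smoothing factor are illustrative, not from an actual patch):
{code:java}
// Hypothetical tracker estimating msync wait time from the Observer's
// observed txid processing rate, smoothed with an exponential moving average.
public class ObserverLagEstimator {
  private long prevStateId = -1;  // Y
  private long prevTimeMs = -1;   // ty
  private double msPerTxid = 0;   // t, smoothed
  private static final double ALPHA = 0.2;

  // Record the Observer's current state id X at time tx (each client request).
  public synchronized void record(long stateId, long nowMs) {
    if (prevStateId >= 0 && stateId > prevStateId) {
      double sample = (double) (nowMs - prevTimeMs) / (stateId - prevStateId);
      msPerTxid = msPerTxid == 0 ? sample
          : ALPHA * sample + (1 - ALPHA) * msPerTxid;
    }
    prevStateId = stateId;
    prevTimeMs = nowMs;
  }

  // delta * t: estimated wait before the request can start being processed.
  public synchronized double estimateWaitMs(long clientStateId,
      long observerStateId) {
    long delta = clientStateId - observerStateId;
    return delta > 0 ? delta * msPerTxid : 0;
  }
}
{code}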

version 2: instead of tracking the rate at which the Observer's state id 
increases, we could have t = the average time of processing one request (this 
needs more code, to measure the time a request spends from entering the queue 
until it finishes). delta * t then becomes the estimate of when the client 
request will actually finish.

version 2 requires more code changes, but it is able to handle the case where 
the Observer state id is actually not too far behind, yet the Observer node 
itself is too slow, still causing a long processing time for a request; this 
is not captured by version 1. The downside, though, is that it seemed to me 
there can be cases where version 2 rejects many calls over-aggressively. Also, 
addressing a slow Observer seems a bit beyond the scope of this Jira.

I would say maybe we can go with the simpler version 1 first and see how it 
works out. Any comments [~csun], [~shv]?

> ObserverNode should reject read requests when it is too far behind.
> ---
>
> Key: HDFS-13873
> URL: https://issues.apache.org/jira/browse/HDFS-13873
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
>
> Add a server-side threshold for ObserverNode to reject read requests when it 
> is too far behind.






[jira] [Commented] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622936#comment-16622936
 ] 

Hadoop QA commented on HDDS-370:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 22m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 54s{color} | {color:orange} root: The patch generated 5 new + 0 unchanged - 
0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 26s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-370 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622930#comment-16622930
 ] 

Hadoop QA commented on HDFS-13791:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
34s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 82m 
15s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940687/HDFS-13791-HDFS-12943.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bca0d1c31139 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 77e106f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25114/testReport/ |
| Max. process+thread count | 3264 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25114/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-20 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622867#comment-16622867
 ] 

Anu Engineer commented on HDDS-370:
---

[~ajayydv] I am +1; had a small suggestion if you are going to post a new 
patch.

Functions like {{rpc isScmInChillMode(IsScmInChillModeRequestProto)}} are 
already in the SCM client interface. We can drop {{Scm}} from those names, 
also in the force function call. Thx.

> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-370.00.patch, HDDS-370.01.patch, HDDS-370.02.patch, 
> HDDS-370.03.patch, HDDS-370.04.patch, HDDS-370.05.patch
>
>
> Add and implement following functions in SCMClientProtocolServer
> # isScmInChillMode
> # forceScmExitChillMode






[jira] [Comment Edited] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-20 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622867#comment-16622867
 ] 

Anu Engineer edited comment on HDDS-370 at 9/20/18 11:30 PM:
-

[~ajayydv] I am +1; had a small suggestion if you are going to post a new 
patch.

Functions like {{rpc isScmInChillMode(IsScmInChillModeRequestProto)}} are 
already in the SCM client interface. We can drop {{Scm}} from those names, 
also in the force function call. Thx.

 

Don't post a new patch. I wrote this comment a while ago and just committed it. 
No need for a new patch.


was (Author: anu):
[~ajayydv] I am +1; had a small suggestion if you are going to post a new 
patch.

Functions like {{rpc isScmInChillMode(IsScmInChillModeRequestProto)}} are 
already in the SCM client interface. We can drop {{Scm}} from those names, 
also in the force function call. Thx.

> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-370.00.patch, HDDS-370.01.patch, HDDS-370.02.patch, 
> HDDS-370.03.patch, HDDS-370.04.patch, HDDS-370.05.patch
>
>
> Add and implement following functions in SCMClientProtocolServer
> # isScmInChillMode
> # forceScmExitChillMode






[jira] [Updated] (HDDS-528) add CLI command to check chill mode status and exit chill mode

2018-09-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-528:

Description: 
[HDDS-370] introduces the 2 APIs below:
* isScmInChillMode
* forceScmExitChillMode
This jira is to call them via the relevant CLI commands.

  was: add CLI command to check chill mode status and exit chill mode


> add CLI command to check chill mode status and exit chill mode
> -
>
> Key: HDDS-528
> URL: https://issues.apache.org/jira/browse/HDDS-528
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
>
> [HDDS-370] introduces the 2 APIs below:
> * isScmInChillMode
> * forceScmExitChillMode
> This jira is to call them via the relevant CLI commands.






[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-20 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622849#comment-16622849
 ] 

Kitti Nanasi commented on HDFS-13882:
-

Thanks for the comments [~xiaochen]!

The test failure in TestHdfsConfigFields was related and I fixed it. The other 
test failures do not seem related.

> Set a maximum for the delay before retrying locateFollowingBlock
> 
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, 
> HDFS-13882.003.patch, HDFS-13882.004.patch, HDFS-13882.005.patch
>
>
> More and more we are seeing cases where customers are running into the Java 
> IOException "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
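For reference, a minimal sketch of applying that workaround through the Hadoop 
Configuration API (the value 10 mirrors the workaround above; tune as needed):
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class LocateFollowingBlockRetries {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Raise the retry count from the default 5, per the workaround above.
    conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10);
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Client configured against " + fs.getUri());
  }
}
{code}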






[jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-20 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-13882:

Attachment: HDFS-13882.005.patch

> Set a maximum for the delay before retrying locateFollowingBlock
> 
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, 
> HDFS-13882.003.patch, HDFS-13882.004.patch, HDFS-13882.005.patch
>
>
> More and more we are seeing cases where customers are running into the Java 
> IOException "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.






[jira] [Updated] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-370:

Attachment: HDDS-370.05.patch

> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-370.00.patch, HDDS-370.01.patch, HDDS-370.02.patch, 
> HDDS-370.03.patch, HDDS-370.04.patch, HDDS-370.05.patch
>
>
> Add and implement following functions in SCMClientProtocolServer
> # isScmInChillMode
> # forceScmExitChillMode






[jira] [Commented] (HDDS-368) all tests in TestOzoneRestClient failed due to "zh_CN" OS language

2018-09-20 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622823#comment-16622823
 ] 

Tsz Wo Nicholas Sze commented on HDDS-368:
--

> Java version: 1.8.0_111

Could you also try updating it? Mine is 1.8.0_172.

> FYI, once a string transferred over HTTP contains Chinese characters (or any 
> characters outside English letters and numbers), "string".length() will be 
> shorter than "string".getBytes().length, so the data gets truncated in 
> transfer and the error occurs.

Do you see a way to fix it?
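For reference, a minimal Java demonstration of the char-count vs byte-count 
mismatch quoted above (any multi-byte UTF-8 string shows the same effect):
{code:java}
import java.nio.charset.StandardCharsets;

public class LengthMismatch {
  public static void main(String[] args) {
    String s = "ozone中文";  // mixed ASCII and Chinese characters
    // length() counts UTF-16 chars; getBytes() counts encoded bytes.
    System.out.println(s.length());                                 // 7
    System.out.println(s.getBytes(StandardCharsets.UTF_8).length);  // 11
    // Sizing an HTTP buffer by length() instead of byte length truncates data.
  }
}
{code}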

> all tests in TestOzoneRestClient failed due to "zh_CN" OS language
> --
>
> Key: HDDS-368
> URL: https://issues.apache.org/jira/browse/HDDS-368
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: LiXin Ge
>Priority: Critical
>  Labels: alpha2
>
> OS: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-116-generic x86_64)
> java version: 1.8.0_111
> mvn: Apache Maven 3.3.9
> Default locale: zh_CN, platform encoding: UTF-8
> Test command: mvn test -Dtest=TestOzoneRestClient -Phdds
>  
>  All the tests in TestOzoneRestClient failed on my local machine with an 
> exception like the one below; does it mean anybody with a runtime environment 
> like mine can't run the Ozone REST tests now?
> {noformat}
> [ERROR] 
> testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient) 
> Time elapsed: 0.01 s <<< ERROR!
> java.io.IOException: org.apache.hadoop.ozone.client.rest.OzoneException: 
> Unparseable date: "m, 28 1970 19:23:50 GMT"
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:853)
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:252)
>  at 
> org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:210)
>  at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
>  at com.sun.proxy.$Proxy73.createVolume(Unknown Source)
>  at 
> org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:66)
>  at 
> org.apache.hadoop.ozone.client.rest.TestOzoneRestClient.testCreateBucket(TestOzoneRestClient.java:174)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> Caused by: org.apache.hadoop.ozone.client.rest.OzoneException: Unparseable 
> date: "m, 28 1970 19:23:50 GMT"
> at sun.reflect.GeneratedConstructorAccessor27.newInstance(Unknown 
> Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
> at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:270)
> at 
> com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:149)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
> at 
> com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
> at 
> com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
> at 
> org.apache.hadoop.ozone.client.rest.OzoneException.parse(OzoneException.java:265)
> ... 39 more
> {noformat}
> or like:
> {noformat}
> [ERROR] Failures:
> [ERROR]   TestOzoneRestClient.testDeleteKey
> Expected: exception with message a string containing "Lookup key failed, 
> error"
>  but: message was "Unexpected end-of-input within/between Object entries
>  at [Source: (String)"{
>   "owner" : {
> "name" : "hadoop"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "f93ed82d-dff6-4b75-a1c5-6a0fef5aa6dd",
>   "createdOn" : "���, 06 ��� +50611 08:28:21 GMT",
>   "createdBy" "; line: 11, column: 251]"
> Stacktrace was: com.fasterxml.jackson.core.io.JsonEOFException: Unexpected 
> end-of-input within/between Object entries
>  at [Source: (String)"{
>   "owner" : {
> "name" : "hadoop"
>   

[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-20 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622819#comment-16622819
 ] 

Erik Krogen commented on HDFS-13791:


Just attached v004 patch rebasing on top of the changes in HADOOP-15726.

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch, HDFS-13791-HDFS-12943.002.patch, 
> HDFS-13791-HDFS-12943.003.patch, HDFS-13791-HDFS-12943.004.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailed only on the order of 
> once every tens of seconds, this is fine. With the work in HDFS-13150, 
> however, edits may be tailed every few milliseconds, which can flood the 
> logs with tailing-related statements. We should throttle it to print at 
> most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.
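A minimal sketch of the kind of suppression described above (a hypothetical 
helper, not the HDFS-10713 implementation):
{code:java}
// Hypothetical rate-limited logging helper: allows a log statement at most
// once per interval and counts how many messages were suppressed in between.
public class LogThrottler {
  private final long intervalMs;
  private long lastLogMs = Long.MIN_VALUE / 2;  // allow the first log
  private long suppressed = 0;

  public LogThrottler(long intervalMs) { this.intervalMs = intervalMs; }

  // Returns a suffix to append (e.g. " (suppressed 42 similar messages)")
  // when the caller should log now, or null when the message should be skipped.
  public synchronized String shouldLog(long nowMs) {
    if (nowMs - lastLogMs < intervalMs) {
      suppressed++;
      return null;
    }
    String suffix = suppressed > 0
        ? " (suppressed " + suppressed + " similar messages)" : "";
    lastLogMs = nowMs;
    suppressed = 0;
    return suffix;
  }
}
{code}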






[jira] [Updated] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-20 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13791:
---
Attachment: HDFS-13791-HDFS-12943.004.patch

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch, HDFS-13791-HDFS-12943.002.patch, 
> HDFS-13791-HDFS-12943.003.patch, HDFS-13791-HDFS-12943.004.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailed only on the order of 
> once every tens of seconds, this is fine. With the work in HDFS-13150, 
> however, edits may be tailed every few milliseconds, which can flood the 
> logs with tailing-related statements. We should throttle it to print at 
> most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.






[jira] [Commented] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-20 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622805#comment-16622805
 ] 

Ajay Kumar commented on HDDS-370:
-

[~xyao] good suggestion. Consolidated tests from TestStorageContainerManager 
and TestRpcClient into {{TestScmChillMode}}.

> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-370.00.patch, HDDS-370.01.patch, HDDS-370.02.patch, 
> HDDS-370.03.patch, HDDS-370.04.patch
>
>
> Add and implement following functions in SCMClientProtocolServer
> # isScmInChillMode
> # forceScmExitChillMode






[jira] [Commented] (HDFS-13930) Fix crlf line endings in HDFS-12943 branch

2018-09-20 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622801#comment-16622801
 ] 

Erik Krogen commented on HDFS-13930:


That was the commit that was merged, but as far as I can tell, the changes 
were introduced during the merge, not during HADOOP-15707.

> Fix crlf line endings in HDFS-12943 branch
> --
>
> Key: HDFS-13930
> URL: https://issues.apache.org/jira/browse/HDFS-13930
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> One of the merge commits introduced the wrong line endings to some {{*.cmd}} 
> files. Looks like it was commit {{1363eff69c3}} that broke it.
> The tree is:
> {code}
> * |   1363eff69c3 2018-09-17 Merge commit 
> '9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4' into HDFS-12943 [Konstantin V 
> Shvachko ]
> |\ \
> | |/
> | * 9af96d4ed4b 2018-09-05 HADOOP-15707. Add IsActiveServlet to be used for 
> Load Balancers. Contributed by Lukas Majercak. [Giovanni Matteo Fumarola 
> ]
> * |   94d7f90e93b 2018-09-17 Merge commit 
> 'e780556ae9229fe7a90817eb4e5449d7eed35dd8' into HDFS-12943 [Konstantin V 
> Shvachko ]
> {code}
> So that merge commit should have only introduced a single new commit 
> {{9af96d4ed4b}}. But:
> {code}
> ± git show --stat 9af96d4ed4b | cat
> commit 9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4
> Author: Giovanni Matteo Fumarola 
> Date:   Wed Sep 5 10:50:25 2018 -0700
> HADOOP-15707. Add IsActiveServlet to be used for Load Balancers. 
> Contributed by Lukas Majercak.
>  .../org/apache/hadoop/http/IsActiveServlet.java| 71 
>  .../apache/hadoop/http/TestIsActiveServlet.java| 95 
> ++
>  .../federation/router/IsRouterActiveServlet.java   | 37 +
>  .../server/federation/router/RouterHttpServer.java |  9 ++
>  .../src/site/markdown/HDFSRouterFederation.md  |  2 +-
>  .../server/namenode/IsNameNodeActiveServlet.java   | 33 
>  .../hdfs/server/namenode/NameNodeHttpServer.java   |  3 +
>  .../site/markdown/HDFSHighAvailabilityWithQJM.md   |  8 ++
>  .../IsResourceManagerActiveServlet.java| 38 +
>  .../server/resourcemanager/ResourceManager.java|  5 ++
>  .../resourcemanager/webapp/RMWebAppFilter.java |  3 +-
>  .../src/site/markdown/ResourceManagerHA.md |  5 ++
>  12 files changed, 307 insertions(+), 2 deletions(-)
> {code}
> that commit has no changes to the {{*.cmd}} files, whereas the merge commit does:
> {code}
> ± git show --stat 1363eff69c3 | cat
> commit 1363eff69c36c4f2085194b59a86370505cc00cd
> Merge: 94d7f90e93b 9af96d4ed4b
> Author: Konstantin V Shvachko 
> Date:   Mon Sep 17 17:39:11 2018 -0700
> Merge commit '9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4' into HDFS-12943
> # Conflicts:
> #   
> hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
>  .../hadoop-common/src/main/bin/start-all.cmd   | 104 
> ++---
>  .../hadoop-common/src/main/bin/stop-all.cmd| 104 
> ++---
>  .../org/apache/hadoop/http/IsActiveServlet.java|  71 ++
>  .../apache/hadoop/http/TestIsActiveServlet.java|  95 +++
>  .../federation/router/IsRouterActiveServlet.java   |  37 
>  .../server/federation/router/RouterHttpServer.java |   9 ++
>  .../src/site/markdown/HDFSRouterFederation.md  |   2 +-
>  .../hadoop-hdfs/src/main/bin/hdfs-config.cmd   |  86 -
>  .../hadoop-hdfs/src/main/bin/start-dfs.cmd |  82 
>  .../hadoop-hdfs/src/main/bin/stop-dfs.cmd  |  82 
>  .../server/namenode/IsNameNodeActiveServlet.java   |  33 +++
>  .../hdfs/server/namenode/NameNodeHttpServer.java   |   3 +
>  .../site/markdown/HDFSHighAvailabilityWithQJM.md   |   8 ++
>  hadoop-mapreduce-project/bin/mapred-config.cmd |  86 -
>  hadoop-tools/hadoop-streaming/src/test/bin/cat.cmd |  36 +++
>  .../hadoop-streaming/src/test/bin/xargs_cat.cmd|  36 +++
>  hadoop-yarn-project/hadoop-yarn/bin/start-yarn.cmd |  94 +--
>  hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.cmd  |  94 +--
>  .../IsResourceManagerActiveServlet.java|  38 
>  .../server/resourcemanager/ResourceManager.java|   5 +
>  .../resourcemanager/webapp/RMWebAppFilter.java |   3 +-
>  .../src/site/markdown/ResourceManagerHA.md |   5 +
>  22 files changed, 709 insertions(+), 404 deletions(-)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-528) add cli command to check chill mode status and exit chill mode

2018-09-20 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-528:
---

 Summary: add cli command to check chill mode status and exit chill 
mode
 Key: HDDS-528
 URL: https://issues.apache.org/jira/browse/HDDS-528
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Ajay Kumar


add cli command to check chill mode status and exit chill mode
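A hypothetical sketch of what such a subcommand could look like (picocli's @Command/Callable API is real, but the command name, the option, and the two ScmClient methods shown are assumptions mirroring HDDS-370, not the actual patch):
{code}
// Hypothetical sketch of the proposed subcommand -- not the actual patch.
import java.util.concurrent.Callable;
import picocli.CommandLine;

@CommandLine.Command(name = "chillmode",
    description = "Check SCM chill mode status, or force SCM out of chill mode")
public class ChillModeCommand implements Callable<Void> {

  @CommandLine.Option(names = "--exit",
      description = "Force SCM to exit chill mode")
  private boolean exit;

  private ScmClient scmClient;  // handle to SCM, construction elided

  @Override
  public Void call() throws Exception {
    if (exit) {
      // Method name assumed; see the forceScmExitChillMode RPC in HDDS-370.
      System.out.println("Exit chill mode: " + scmClient.forceExitChillMode());
    } else {
      // Method name assumed; see the isScmInChillMode RPC in HDDS-370.
      System.out.println("SCM in chill mode: " + scmClient.inChillMode());
    }
    return null;
  }
}
{code}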



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-391) Simplify AuditMessage structure to make audit logging easier to use

2018-09-20 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-391:
---
Description: 
In HDDS-376 a custom AuditMessage structure was created for use in Audit 
Logging.

This Jira proposes to incorporate [suggested 
improvements|https://issues.apache.org/jira/browse/HDDS-376?focusedCommentId=16594170=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16594170]
 from [~ajayydv].
 * AuditMessage should encapsulate log level, audit status, message and 
exception.
 * AuditMessage should use the | delimiter instead of a space. This will be 
especially useful once AuditParser is completed as part of HDDS-393

  was:
In HDDS-376 a custom AuditMessage structure was created for use in Audit 
Logging.

This Jira proposes to incorporate suggested improvements from [~ajayydv].
 * AuditMessage should encapsulate log level, audit status, message and 
exception.
 * AuditMessage should use the | delimiter instead of a space. This will be 
especially useful once AuditParser is completed as part of HDDS-393


> Simplify AuditMessage structure to make audit logging easier to use
> ---
>
> Key: HDDS-391
> URL: https://issues.apache.org/jira/browse/HDDS-391
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> In HDDS-376 a custom AuditMessage structure was created for use in Audit 
> Logging.
> This Jira proposes to incorporate [suggested 
> improvements|https://issues.apache.org/jira/browse/HDDS-376?focusedCommentId=16594170=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16594170]
>  from [~ajayydv].
>  * AuditMessage should encapsulate log level, audit status, message and 
> exception.
> * AuditMessage should use the | delimiter instead of a space. This will be 
> especially useful once AuditParser is completed as part of HDDS-393
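A minimal sketch of that shape (field names and rendering are assumptions for illustration, not the actual HDDS-391 code):
{code}
// Illustrative sketch only: carries level, status, message and exception,
// and renders a '|'-delimited audit line.
public class AuditMessage {
  private final String level;        // log level, e.g. INFO / ERROR
  private final String status;       // audit status, e.g. SUCCESS / FAILURE
  private final String message;
  private final Throwable exception; // may be null

  public AuditMessage(String level, String status, String message,
      Throwable exception) {
    this.level = level;
    this.status = status;
    this.message = message;
    this.exception = exception;
  }

  @Override
  public String toString() {
    // The '|' delimiter keeps fields unambiguous for AuditParser (HDDS-393).
    String base = String.join(" | ", level, status, message);
    return exception == null ? base : base + " | " + exception;
  }
}
{code}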



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1915) fuse-dfs does not support append

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622796#comment-16622796
 ] 

Hadoop QA commented on HDFS-1915:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  3m 38s{color} | 
{color:red} hadoop-hdfs-project generated 3 new + 2 unchanged - 0 fixed = 5 
total (was 2) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 45s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
|   | test_libhdfs_threaded_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-1915 |
| JIRA Patch URL | 

[jira] [Created] (HDDS-527) Show SCM chill mode status in SCM UI.

2018-09-20 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-527:
---

 Summary: Show SCM chill mode status in SCM UI.
 Key: HDDS-527
 URL: https://issues.apache.org/jira/browse/HDDS-527
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Ajay Kumar


Show SCM chill mode status in SCM UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13930) Fix crlf line endings in HDFS-12943 branch

2018-09-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622793#comment-16622793
 ] 

Íñigo Goiri commented on HDFS-13930:


[~xkrogen], is the commit for HADOOP-15707?
CC [~giovanni.fumarola]

> Fix crlf line endings in HDFS-12943 branch
> --
>
> Key: HDFS-13930
> URL: https://issues.apache.org/jira/browse/HDFS-13930
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> One of the merge commits introduced the wrong line endings to some {{*.cmd}} 
> files. Looks like it was commit {{1363eff69c3}} that broke it.
> The tree is:
> {code}
> * |   1363eff69c3 2018-09-17 Merge commit 
> '9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4' into HDFS-12943 [Konstantin V 
> Shvachko ]
> |\ \
> | |/
> | * 9af96d4ed4b 2018-09-05 HADOOP-15707. Add IsActiveServlet to be used for 
> Load Balancers. Contributed by Lukas Majercak. [Giovanni Matteo Fumarola 
> ]
> * |   94d7f90e93b 2018-09-17 Merge commit 
> 'e780556ae9229fe7a90817eb4e5449d7eed35dd8' into HDFS-12943 [Konstantin V 
> Shvachko ]
> {code}
> So that merge commit should have only introduced a single new commit 
> {{9af96d4ed4b}}. But:
> {code}
> ± git show --stat 9af96d4ed4b | cat
> commit 9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4
> Author: Giovanni Matteo Fumarola 
> Date:   Wed Sep 5 10:50:25 2018 -0700
> HADOOP-15707. Add IsActiveServlet to be used for Load Balancers. 
> Contributed by Lukas Majercak.
>  .../org/apache/hadoop/http/IsActiveServlet.java| 71 
>  .../apache/hadoop/http/TestIsActiveServlet.java| 95 
> ++
>  .../federation/router/IsRouterActiveServlet.java   | 37 +
>  .../server/federation/router/RouterHttpServer.java |  9 ++
>  .../src/site/markdown/HDFSRouterFederation.md  |  2 +-
>  .../server/namenode/IsNameNodeActiveServlet.java   | 33 
>  .../hdfs/server/namenode/NameNodeHttpServer.java   |  3 +
>  .../site/markdown/HDFSHighAvailabilityWithQJM.md   |  8 ++
>  .../IsResourceManagerActiveServlet.java| 38 +
>  .../server/resourcemanager/ResourceManager.java|  5 ++
>  .../resourcemanager/webapp/RMWebAppFilter.java |  3 +-
>  .../src/site/markdown/ResourceManagerHA.md |  5 ++
>  12 files changed, 307 insertions(+), 2 deletions(-)
> {code}
> that commit has no changes to the {{*.cmd}} files, whereas the merge commit does:
> {code}
> ± git show --stat 1363eff69c3 | cat
> commit 1363eff69c36c4f2085194b59a86370505cc00cd
> Merge: 94d7f90e93b 9af96d4ed4b
> Author: Konstantin V Shvachko 
> Date:   Mon Sep 17 17:39:11 2018 -0700
> Merge commit '9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4' into HDFS-12943
> # Conflicts:
> #   
> hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
>  .../hadoop-common/src/main/bin/start-all.cmd   | 104 
> ++---
>  .../hadoop-common/src/main/bin/stop-all.cmd| 104 
> ++---
>  .../org/apache/hadoop/http/IsActiveServlet.java|  71 ++
>  .../apache/hadoop/http/TestIsActiveServlet.java|  95 +++
>  .../federation/router/IsRouterActiveServlet.java   |  37 
>  .../server/federation/router/RouterHttpServer.java |   9 ++
>  .../src/site/markdown/HDFSRouterFederation.md  |   2 +-
>  .../hadoop-hdfs/src/main/bin/hdfs-config.cmd   |  86 -
>  .../hadoop-hdfs/src/main/bin/start-dfs.cmd |  82 
>  .../hadoop-hdfs/src/main/bin/stop-dfs.cmd  |  82 
>  .../server/namenode/IsNameNodeActiveServlet.java   |  33 +++
>  .../hdfs/server/namenode/NameNodeHttpServer.java   |   3 +
>  .../site/markdown/HDFSHighAvailabilityWithQJM.md   |   8 ++
>  hadoop-mapreduce-project/bin/mapred-config.cmd |  86 -
>  hadoop-tools/hadoop-streaming/src/test/bin/cat.cmd |  36 +++
>  .../hadoop-streaming/src/test/bin/xargs_cat.cmd|  36 +++
>  hadoop-yarn-project/hadoop-yarn/bin/start-yarn.cmd |  94 +--
>  hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.cmd  |  94 +--
>  .../IsResourceManagerActiveServlet.java|  38 
>  .../server/resourcemanager/ResourceManager.java|   5 +
>  .../resourcemanager/webapp/RMWebAppFilter.java |   3 +-
>  .../src/site/markdown/ResourceManagerHA.md |   5 +
>  22 files changed, 709 insertions(+), 404 deletions(-)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13930) Fix crlf line endings in HDFS-12943 branch

2018-09-20 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-13930:
--

 Summary: Fix crlf line endings in HDFS-12943 branch
 Key: HDFS-13930
 URL: https://issues.apache.org/jira/browse/HDFS-13930
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Erik Krogen
Assignee: Erik Krogen


One of the merge commits introduced the wrong line endings to some {{*.cmd}} 
files. Looks like it was commit {{1363eff69c3}} that broke it.

The tree is:
{code}
* |   1363eff69c3 2018-09-17 Merge commit 
'9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4' into HDFS-12943 [Konstantin V 
Shvachko ]
|\ \
| |/
| * 9af96d4ed4b 2018-09-05 HADOOP-15707. Add IsActiveServlet to be used for 
Load Balancers. Contributed by Lukas Majercak. [Giovanni Matteo Fumarola 
]
* |   94d7f90e93b 2018-09-17 Merge commit 
'e780556ae9229fe7a90817eb4e5449d7eed35dd8' into HDFS-12943 [Konstantin V 
Shvachko ]
{code}
So that merge commit should have only introduced a single new commit 
{{9af96d4ed4b}}. But:
{code}
± git show --stat 9af96d4ed4b | cat
commit 9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4
Author: Giovanni Matteo Fumarola 
Date:   Wed Sep 5 10:50:25 2018 -0700

HADOOP-15707. Add IsActiveServlet to be used for Load Balancers. 
Contributed by Lukas Majercak.

 .../org/apache/hadoop/http/IsActiveServlet.java| 71 
 .../apache/hadoop/http/TestIsActiveServlet.java| 95 ++
 .../federation/router/IsRouterActiveServlet.java   | 37 +
 .../server/federation/router/RouterHttpServer.java |  9 ++
 .../src/site/markdown/HDFSRouterFederation.md  |  2 +-
 .../server/namenode/IsNameNodeActiveServlet.java   | 33 
 .../hdfs/server/namenode/NameNodeHttpServer.java   |  3 +
 .../site/markdown/HDFSHighAvailabilityWithQJM.md   |  8 ++
 .../IsResourceManagerActiveServlet.java| 38 +
 .../server/resourcemanager/ResourceManager.java|  5 ++
 .../resourcemanager/webapp/RMWebAppFilter.java |  3 +-
 .../src/site/markdown/ResourceManagerHA.md |  5 ++
 12 files changed, 307 insertions(+), 2 deletions(-)
{code}
that commit has no changes to the {{*.cmd}} files, whereas the merge commit does:
{code}
± git show --stat 1363eff69c3 | cat
commit 1363eff69c36c4f2085194b59a86370505cc00cd
Merge: 94d7f90e93b 9af96d4ed4b
Author: Konstantin V Shvachko 
Date:   Mon Sep 17 17:39:11 2018 -0700

Merge commit '9af96d4ed4b6f80d3ca53a2b003d2ef768650dd4' into HDFS-12943

# Conflicts:
#   
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md

 .../hadoop-common/src/main/bin/start-all.cmd   | 104 ++---
 .../hadoop-common/src/main/bin/stop-all.cmd| 104 ++---
 .../org/apache/hadoop/http/IsActiveServlet.java|  71 ++
 .../apache/hadoop/http/TestIsActiveServlet.java|  95 +++
 .../federation/router/IsRouterActiveServlet.java   |  37 
 .../server/federation/router/RouterHttpServer.java |   9 ++
 .../src/site/markdown/HDFSRouterFederation.md  |   2 +-
 .../hadoop-hdfs/src/main/bin/hdfs-config.cmd   |  86 -
 .../hadoop-hdfs/src/main/bin/start-dfs.cmd |  82 
 .../hadoop-hdfs/src/main/bin/stop-dfs.cmd  |  82 
 .../server/namenode/IsNameNodeActiveServlet.java   |  33 +++
 .../hdfs/server/namenode/NameNodeHttpServer.java   |   3 +
 .../site/markdown/HDFSHighAvailabilityWithQJM.md   |   8 ++
 hadoop-mapreduce-project/bin/mapred-config.cmd |  86 -
 hadoop-tools/hadoop-streaming/src/test/bin/cat.cmd |  36 +++
 .../hadoop-streaming/src/test/bin/xargs_cat.cmd|  36 +++
 hadoop-yarn-project/hadoop-yarn/bin/start-yarn.cmd |  94 +--
 hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.cmd  |  94 +--
 .../IsResourceManagerActiveServlet.java|  38 
 .../server/resourcemanager/ResourceManager.java|   5 +
 .../resourcemanager/webapp/RMWebAppFilter.java |   3 +-
 .../src/site/markdown/ResourceManagerHA.md |   5 +
 22 files changed, 709 insertions(+), 404 deletions(-)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-447) separate ozone-dist and hadoop-dist projects with real classpath separation

2018-09-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622761#comment-16622761
 ] 

Bharat Viswanadham edited comment on HDDS-447 at 9/20/18 9:56 PM:
--

Thank you, [~elek], for the patch. 

I am able to compile now. 

There are a few changes I have not understood; questions below.
{quote}The classpath of 'ozone fs' is solved in a more easy way. I just added 
the ozonefs project as a dependency to the tools. Now the classpath of the 
tools project could be used for all the tools (ozone scmcli, ozone fs, ...). 
But it introduced a circular dependency. I fixed it with moving 4 test classes 
to the tools. (which also helped to get the classes and the test classes in the 
same projects).
{quote}
1. Why is the above-mentioned change needed, and why were the test files moved?

2. Now we don't have a separate folder per project (hdds, ozone) in the ozone 
tarball; all ozone-related jars are in share/ozone. When running a component, 
do we put only the jars that component needs on the classpath, by reading them 
from its generated classpath file?

3. And do we need the empty folders share/hadoop/ozone and share/hadoop/hdds?

4. In bin/ozone, line 86: ";;a" should be changed to ";;". Because of this, the 
docker clusters do not come up when I try to start them. (Thanks to the smoke 
tests, which helped figure this out immediately.)


was (Author: bharatviswa):
Thank you, [~elek], for the patch. 

I am able to compile now. 

There are a few changes I have not understood; questions below.
{quote}The classpath of 'ozone fs' is solved in a more easy way. I just added 
the ozonefs project as a dependency to the tools. Now the classpath of the 
tools project could be used for all the tools (ozone scmcli, ozone fs, ...). 
But it introduced a circular dependency. I fixed it with moving 4 test classes 
to the tools. (which also helped to get the classes and the test classes in the 
same projects).
{quote}
1. Why is the above-mentioned change needed, and why were the test files moved?

2. Now we don't have a separate folder per project (hdds, ozone) in the ozone 
tarball; all ozone-related jars are in share/ozone. When running a component, 
do we put only the jars that component needs on the classpath, by reading them 
from its generated classpath file?

3. And do we need the empty folders share/hadoop/ozone and share/hadoop/hdds?

> separate ozone-dist and hadoop-dist projects with real classpath separation
> ---
>
> Key: HDDS-447
> URL: https://issues.apache.org/jira/browse/HDDS-447
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-447-ozone-0.2.001.patch, HDDS-447.003.patch, 
> HDDS-447.004.patch, HDDS-447.005.patch
>
>
> Currently we have the same hadoop-dist project to create both the ozone and 
> the hadoop distribution.
> To decouple the ozone and hadoop builds it would be great to create two 
> different dist projects.
> The hadoop-dist should be cloned to hadoop-ozone/dist and from 
> hadoop-dist/pom.xml we can remove the hdds/ozone related items and from 
> hadoop-ozone/dist/pom.xml we can remove the core hadoop related part.
> Another issue with the current distribution schema is the lack of real 
> classpath separation. 
> The current hadoop distribution model is defined in the hadoop-project-dist 
> which is the parent of all the component projects, and the output of the 
> distribution generation is copied by the dist-layout-stitching. There is 
> no easy way to use a command-specific classpath as the classpath is defined 
> at component level (hdfs/yarn/mapreduce).
> With this approach we will have a lot of unnecessary dependencies on the 
> classpath (which were not on the classpath at the time of the unit tests) and 
> it's not possible (as an example) to use a different type of jaxrs stack for 
> different services (s3gateway vs scm).
> As a simplified but more effective approach I propose to use the following 
> method:
> 1. don't use hadoop-project-dist for ozone projects any more
> 2. During the build, generate a classpath descriptor (with the 
> dependency:build-classpath maven plugin/goal) for all the projects
> 3. During the distribution, copy all the required dependencies (with the 
> dependency:copy maven plugin/goal) to a lib folder (share/ozone/lib)
> 4. During the distribution, copy all the classpath descriptors to the 
> classpath folder (share/ozone/classpath)
> 5. Put only the required jar files on the classpath by reading the 
> classpath descriptor 
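As a conceptual illustration of step 5 (in practice this logic would live in the bin/ozone launcher script; the descriptor file naming is an assumption):
{code}
// Conceptual sketch only: resolve a per-command classpath from the
// descriptor generated by dependency:build-classpath at build time.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ClasspathResolver {
  public static String resolve(String ozoneHome, String component)
      throws IOException {
    // e.g. share/ozone/classpath/<component>.classpath (name assumed)
    String descriptor = ozoneHome + "/share/ozone/classpath/"
        + component + ".classpath";
    // The descriptor holds the ':'-separated jar list for this component.
    return new String(Files.readAllBytes(Paths.get(descriptor))).trim();
  }
}
{code}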

[jira] [Commented] (HDDS-447) separate ozone-dist and hadoop-dist projects with real classpath separation

2018-09-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622761#comment-16622761
 ] 

Bharat Viswanadham commented on HDDS-447:
-

Thank you, [~elek], for the patch. 

I am able to compile now. 

There are a few changes I have not understood; questions below.
{quote}The classpath of 'ozone fs' is solved in a more easy way. I just added 
the ozonefs project as a dependency to the tools. Now the classpath of the 
tools project could be used for all the tools (ozone scmcli, ozone fs, ...). 
But it introduced a circular dependency. I fixed it with moving 4 test classes 
to the tools. (which also helped to get the classes and the test classes in the 
same projects).
{quote}
1. Why is the above-mentioned change needed, and why were the test files moved?

2. Now we don't have a separate folder per project (hdds, ozone) in the ozone 
tarball; all ozone-related jars are in share/ozone. When running a component, 
do we put only the jars that component needs on the classpath, by reading them 
from its generated classpath file?

3. And do we need the empty folders share/hadoop/ozone and share/hadoop/hdds?

> separate ozone-dist and hadoop-dist projects with real classpath separation
> ---
>
> Key: HDDS-447
> URL: https://issues.apache.org/jira/browse/HDDS-447
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-447-ozone-0.2.001.patch, HDDS-447.003.patch, 
> HDDS-447.004.patch, HDDS-447.005.patch
>
>
> Currently we have the same hadoop-dist project to create both the ozone and 
> the hadoop distribution.
> To decouple the ozone and hadoop builds it would be great to create two 
> different dist projects.
> The hadoop-dist should be cloned to hadoop-ozone/dist and from 
> hadoop-dist/pom.xml we can remove the hdds/ozone related items and from 
> hadoop-ozone/dist/pom.xml we can remove the core hadoop related part.
> Another issue with the current distribution schema is the lack of real 
> classpath separation. 
> The current hadoop distribution model is defined in the hadoop-project-dist 
> which is the parent of all the component projects, and the output of the 
> distribution generation is copied by the dist-layout-stitching. There is 
> no easy way to use a command-specific classpath as the classpath is defined 
> at component level (hdfs/yarn/mapreduce).
> With this approach we will have a lot of unnecessary dependencies on the 
> classpath (which were not on the classpath at the time of the unit tests) and 
> it's not possible (as an example) to use a different type of jaxrs stack for 
> different services (s3gateway vs scm).
> As a simplified but more effective approach I propose to use the following 
> method:
> 1. don't use hadoop-project-dist for ozone projects any more
> 2. During the build, generate a classpath descriptor (with the 
> dependency:build-classpath maven plugin/goal) for all the projects
> 3. During the distribution, copy all the required dependencies (with the 
> dependency:copy maven plugin/goal) to a lib folder (share/ozone/lib)
> 4. During the distribution, copy all the classpath descriptors to the 
> classpath folder (share/ozone/classpath)
> 5. Put only the required jar files on the classpath by reading the 
> classpath descriptor 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622742#comment-16622742
 ] 

Hadoop QA commented on HDFS-13876:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 22 new + 425 unchanged - 0 fixed = 447 total (was 425) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
57s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13876 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940662/HDFS-13876.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 70b7d5e95b28 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 524f7cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25112/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25112/testReport/ |
| Max. process+thread count | 645 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25112/console |
| Powered by | 

[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-20 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622727#comment-16622727
 ] 

Ajay Kumar commented on HDDS-401:
-

[~GeLiXin] thanks for the patch. Overall the patch looks good. A few comments 
and questions:
TestDeadNodeHandler
* testStatisticsUpdate: Shall we emit the actual {{SCMEvents.DEAD_NODE}} event 
for datanode1 (L181)? This will require an EventQueue field in the test class 
and registering deadNodeHandler as the handler for the {{SCMEvents.DEAD_NODE}} 
event in the setup function: {code}.addHandler(SCMEvents.DEAD_NODE, 
deadNodeHandler);{code}
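A sketch of the suggested test wiring (assuming the EventQueue API on this branch; names follow the comment above):
{code}
// Sketch only: fire the real DEAD_NODE event instead of invoking the
// handler directly, then wait for the queue to process it.
EventQueue eventQueue = new EventQueue();
eventQueue.addHandler(SCMEvents.DEAD_NODE, deadNodeHandler);
eventQueue.fireEvent(SCMEvents.DEAD_NODE, datanode1);
eventQueue.processAll(1000);  // wait up to 1s for the handler to run
{code}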

SCMNodeManager#processDeadNode
Not sure what the right answer is here, but I wanted to raise it as a question. 
With the current approach, if somebody tries to get the NodeStat for a dead 
node they will get a NodeNotFoundException, which may imply the node doesn't 
exist in the cluster. A dead datanode is not decommissioned or removed from the 
cluster, so that implication is not correct. Wondering if setting the stats for 
a dead node to 0 is better than removing its entry altogether.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13873) ObserverNode should reject read requests when it is too far behind.

2018-09-20 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622716#comment-16622716
 ] 

Chao Sun commented on HDFS-13873:
-

[~vagarychen]: feel free to take it - I was waiting for HDFS-13749 to be 
resolved since this depends on that.
Our internal implementation is quite simple: we just have a staleness threshold 
based on time (the txid hasn't been updated in the past X minutes).
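As a sketch, such a server-side gate could look like this (illustrative only, not the internal implementation described above):
{code}
// Illustrative only: reject reads when the applied txid has gone stale.
private volatile long lastTxIdUpdateTimeMs = Time.monotonicNow();

void onEditsApplied() {            // called whenever new edits are applied
  lastTxIdUpdateTimeMs = Time.monotonicNow();
}

void checkNotTooFarBehind(long thresholdMs) throws RetriableException {
  if (Time.monotonicNow() - lastTxIdUpdateTimeMs > thresholdMs) {
    // RetriableException lets the client-side proxy retry another node.
    throw new RetriableException("Observer is too far behind");
  }
}
{code}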

> ObserverNode should reject read requests when it is too far behind.
> ---
>
> Key: HDFS-13873
> URL: https://issues.apache.org/jira/browse/HDFS-13873
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
>
> Add a server-side threshold for ObserverNode to reject read requests when it 
> is too far behind.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-09-20 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-1915:
---
Status: Patch Available  (was: In Progress)

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, 
> HDFS-1915.003.patch
>
>
> Environment: Cloudera CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances); mounted hdfs in the OS using 
> fuse-dfs. 
> Able to do HDFS fs -put, but when I try to use an FTP client (ftp PUT) to do 
> the same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=opensrc=/upload/counter.txt dst=null
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-09-20 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-1915:
---
Attachment: HDFS-1915.003.patch

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, 
> HDFS-1915.003.patch
>
>
> Environment: Cloudera CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances); mounted hdfs in the OS using 
> fuse-dfs. 
> Able to do HDFS fs -put, but when I try to use an FTP client (ftp PUT) to do 
> the same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=opensrc=/upload/counter.txt dst=null
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13927) TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA

2018-09-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622711#comment-16622711
 ] 

Íñigo Goiri commented on HDFS-13927:


8.1 seconds this time, and other than that it is clean.
The failed unit tests are the usual suspects (there was some investigation into 
TestBlockReaderLocal, but it is still pending).
+1 on [^HDFS-13927-02.patch].
Committing soon.

> TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA
> 
>
> Key: HDFS-13927
> URL: https://issues.apache.org/jira/browse/HDFS-13927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-13927-01.patch, HDFS-13927-02.patch
>
>
> Remove the explicit wait in the test for the failed datanode, which hard-codes 
> the exact time required for the process to confirm the status.
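The usual replacement for a hard-coded sleep in such tests is a bounded poll, e.g. with GenericTestUtils.waitFor (a sketch of the pattern, not the actual patch; the condition shown is illustrative):
{code}
// Re-check the condition every 100 ms, failing the test after 10 s.
GenericTestUtils.waitFor(
    () -> cluster.getNameNode().getNamesystem().getNumDeadDataNodes() == 1,
    100, 10000);
{code}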



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1915) fuse-dfs does not support append

2018-09-20 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-1915:
---
Status: In Progress  (was: Patch Available)

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch
>
>
> Environment: Cloudera CDH3, EC2 cluster with 2 data nodes and 1 name 
> node (using Ubuntu 10.04 LTS large instances); mounted hdfs in the OS using 
> fuse-dfs. 
> Able to do HDFS fs -put, but when I try to use an FTP client (ftp PUT) to do 
> the same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do an FTP GET on the same mounted 
> volume.
> Please advise
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=nullperm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=opensrc=/upload/counter.txt dst=null
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-447) separate ozone-dist and hadoop-dist projects with real classpath separation

2018-09-20 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-447:
--
Attachment: HDDS-447.005.patch

> separate ozone-dist and hadoop-dist projects with real classpath separation
> ---
>
> Key: HDDS-447
> URL: https://issues.apache.org/jira/browse/HDDS-447
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-447-ozone-0.2.001.patch, HDDS-447.003.patch, 
> HDDS-447.004.patch, HDDS-447.005.patch
>
>
> Currently we have the same hadoop-dist project to create both the ozone and 
> the hadoop distribution.
> To decouple the ozone and hadoop builds it would be great to create two 
> different dist projects.
> The hadoop-dist should be cloned to hadoop-ozone/dist and from 
> hadoop-dist/pom.xml we can remove the hdds/ozone related items and from 
> hadoop-ozone/dist/pom.xml we can remove the core hadoop related part.
> Another issue with the current distribution schema is the lack of real 
> classpath separation. 
> The current hadoop distribution model is defined in the hadoop-project-dist 
> which is the parent of all the component projects, and the output of the 
> distribution generation is copied by the dist-layout-stitching. There is 
> no easy way to use a command-specific classpath as the classpath is defined 
> at component level (hdfs/yarn/mapreduce).
> With this approach we will have a lot of unnecessary dependencies on the 
> classpath (which were not on the classpath at the time of the unit tests) and 
> it's not possible (as an example) to use a different type of jaxrs stack for 
> different services (s3gateway vs scm).
> As a simplified but more effective approach I propose to use the following 
> method:
> 1. don't use hadoop-project-dist for ozone projects any more
> 2. During the build, generate a classpath descriptor (with the 
> dependency:build-classpath maven plugin/goal) for all the projects
> 3. During the distribution, copy all the required dependencies (with the 
> dependency:copy maven plugin/goal) to a lib folder (share/ozone/lib)
> 4. During the distribution, copy all the classpath descriptors to the 
> classpath folder (share/ozone/classpath)
> 5. Put only the required jar files on the classpath by reading the 
> classpath descriptor 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-526) Clean previous chill mode code from SCMNodeManager.

2018-09-20 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-526:
---

 Summary: Clean previous chill mode code from SCMNodeManager. 
 Key: HDDS-526
 URL: https://issues.apache.org/jira/browse/HDDS-526
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Ajay Kumar


Clean previous chill mode code from SCMNodeManager. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622681#comment-16622681
 ] 

Siyao Meng commented on HDFS-13876:
---

[~shashikant] Submitted patch rev 003 to add testDisallowSnapshotException() to 
both classes, TestHttpFSServer and BaseTestHttpFSWith.
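For reference, the new negative test asserts roughly the following shape (a sketch; here fs stands for the cluster's DistributedFileSystem and the snapshottable directory still holds a snapshot):
{code}
// Sketch of the disallowSnapshot failure case: DISALLOWSNAPSHOT must fail
// while the directory still has snapshots.
try {
  fs.disallowSnapshot(snapshottableDir);
  Assert.fail("disallowSnapshot should not have succeeded");
} catch (SnapshotException e) {
  GenericTestUtils.assertExceptionContains("has snapshot(s)", e);
}
{code}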

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch, HDFS-13876.002.patch, 
> HDFS-13876.003.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13876:
--
Attachment: HDFS-13876.003.patch
Status: Patch Available  (was: In Progress)

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.0.3, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch, HDFS-13876.002.patch, 
> HDFS-13876.003.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13876:
--
Status: In Progress  (was: Patch Available)

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.0.3, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch, HDFS-13876.002.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13749:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch, HDFS-13749-HDFS-12943.004.patch, 
> HDFS-13749-HDFS-12943.005.patch, HDFS-13749-HDFS-12943.006.patch, 
> HDFS-13749-HDFS-12943.007.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13749:
---
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-12943

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch, HDFS-13749-HDFS-12943.004.patch, 
> HDFS-13749-HDFS-12943.005.patch, HDFS-13749-HDFS-12943.006.patch, 
> HDFS-13749-HDFS-12943.007.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622666#comment-16622666
 ] 

Erik Krogen commented on HDFS-13749:


Funny that my v007 precommit beat your v006 to posting :)
{quote}
BTW: you may want to remove the extra spaces between "})" and "{" - I added 
them in a failed attempt.
{quote}
Yeah, I actually based v007 off of v005, so no extra space. Thanks for the 
heads up.

I just committed v007 to HDFS-12943. Thanks for the contribution [~csun]!

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch, HDFS-13749-HDFS-12943.004.patch, 
> HDFS-13749-HDFS-12943.005.patch, HDFS-13749-HDFS-12943.006.patch, 
> HDFS-13749-HDFS-12943.007.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622664#comment-16622664
 ] 

Hadoop QA commented on HDFS-13749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
56s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
19s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
22 unchanged - 13 fixed = 23 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13749 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940627/HDFS-13749-HDFS-12943.006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 48aefe50e5a7 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-514) Clean Unregister JMX upon SCMConnectionManager#close

2018-09-20 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622660#comment-16622660
 ] 

Hudson commented on HDDS-514:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15033 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15033/])
HDDS-514. Clean Unregister JMX upon SCMConnectionManager#close. (aengineer: rev 
524f7cd354e0683c9ec61fdbce344ef79b841728)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/SCMConnectionManager.java


> Clean Unregister JMX upon SCMConnectionManager#close
> 
>
> Key: HDDS-514
> URL: https://issues.apache.org/jira/browse/HDDS-514
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 0.2.2
>
> Attachments: HDDS-514.001.patch
>
>
> Have seen this during unit testing; this ticket is opened to safely 
> unregister MBeans for SCMConnectionManager.
>  
> {code}
> 2018-09-19 22:18:14,059 WARN util.MBeans (MBeans.java:unregister(145)) - 
> Error unregistering Hadoop:service=HddsDatanode,name=SCMConnectionManager-5 
> javax.management.InstanceNotFoundException: 
> Hadoop:service=HddsDatanode,name=SCMConnectionManager-5 at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
>  at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
>  at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:143) at 
> org.apache.hadoop.ozone.container.common.statemachine.SCMConnectionManager.close(SCMConnectionManager.java:194)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.close(DatanodeStateMachine.java:232)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.stopDaemon(DatanodeStateMachine.java:343)
>  at 
> org.apache.hadoop.ozone.HddsDatanodeService.stop(HddsDatanodeService.java:211)
>  at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:237)
>  at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:263)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.TestNodeFailure.testPipelineFail(TestNodeFailure.java:119)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 
> 

[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622656#comment-16622656
 ] 

Hadoop QA commented on HDFS-13749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
15s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
30s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 17 
unchanged - 17 fixed = 17 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLogRace |
|   | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13749 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940635/HDFS-13749-HDFS-12943.007.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6bb2a623c96d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / c377e3c |
| 

[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622654#comment-16622654
 ] 

Hadoop QA commented on HDFS-13876:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 20 new + 424 unchanged - 0 fixed = 444 total (was 424) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
9s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13876 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940651/HDFS-13876.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bc8d6563139f 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 096a716 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25111/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25111/testReport/ |
| Max. process+thread count | 650 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25111/console |
| Powered by | 

[jira] [Commented] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622652#comment-16622652
 ] 

Xiaoyu Yao commented on HDDS-370:
-

Thanks [~ajayydv] for working on this. Patch v4 LGTM. I have one suggestion on 
the unit test:

NIT: rename TestRpcClient->TestChillMode

Can we consolidate the chill mode unit tests by moving 
TestStorageContainerManager#testSCMChillMode() and 
testSCMChillModeRestrictedOp() into TestChillMode.java along with the new test 
case in this patch?

 

> Add and implement following functions in SCMClientProtocolServer
> 
>
> Key: HDDS-370
> URL: https://issues.apache.org/jira/browse/HDDS-370
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-370.00.patch, HDDS-370.01.patch, HDDS-370.02.patch, 
> HDDS-370.03.patch, HDDS-370.04.patch
>
>
> Add and implement following functions in SCMClientProtocolServer
> # isScmInChillMode
> # forceScmExitChillMode



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13873) ObserverNode should reject read requests when it is too far behind.

2018-09-20 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622645#comment-16622645
 ] 

Chen Liang commented on HDFS-13873:
---

[~csun] any updates/plans for this? I saw you are busy with two other Jiras; I 
can help on this one if you like :).

Either way, I'm curious what the current internal implementation at Uber looks 
like. When I was syncing with Konstantin, we were planning to do this based on 
the state id, but the threshold for rejection should probably be based on some 
runtime moving average (e.g. the number of txids processed in the past X 
minutes). Any thoughts on this? A rough sketch of that idea is below.
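
Here is a minimal sketch of the moving-average idea for discussion; every name
in it is invented for illustration and nothing here is from an actual patch:

{code:java}
// Sketch: reject observer reads when the client's state id is ahead of
// the server's by more than maxLagMillis worth of recent txn throughput.
public class StalenessCheck {
  private final long windowMillis = 5 * 60 * 1000L; // the "past X mins"
  private long txnsInWindow; // txids applied during the current window

  /** Record transactions applied by the edit tailer (illustrative hook). */
  public void onTxnsApplied(long count) {
    txnsInWindow += count;
  }

  /** Average txids applied per millisecond over the recent window. */
  private double recentTxnsPerMilli() {
    return (double) txnsInWindow / windowMillis;
  }

  /** True if serving this request would return data that is "too far" behind. */
  public boolean tooFarBehind(long clientStateId, long serverStateId,
      long maxLagMillis) {
    long lagTxns = clientStateId - serverStateId;
    if (lagTxns <= 0) {
      return false; // observer already caught up to what the client saw
    }
    // Convert the txid gap into an estimated time gap using the average.
    double estimatedLagMillis = lagTxns / Math.max(recentTxnsPerMilli(), 1e-9);
    return estimatedLagMillis > maxLagMillis;
  }
}
{code}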

> ObserverNode should reject read requests when it is too far behind.
> ---
>
> Key: HDFS-13873
> URL: https://issues.apache.org/jira/browse/HDFS-13873
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
>
> Add a server-side threshold for ObserverNode to reject read requests when it 
> is too far behind.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13879) FileSystem: Should allowSnapshot() and disallowSnapshot() be part of it?

2018-09-20 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622628#comment-16622628
 ] 

Siyao Meng commented on HDFS-13879:
---

[~jojochuang] If we are also adding getSnapshotDiffReport(), what about 
getSnapshottableDirectoryList()?
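
To make the question concrete, the additions could mirror how the existing 
snapshot methods are declared, i.e. a default body that throws 
UnsupportedOperationException until a subclass overrides it. The signatures 
below are an assumption for discussion, not a committed API:

{code:java}
// Sketch of the proposed FileSystem additions; this class stands in for
// org.apache.hadoop.fs.FileSystem so the snippet compiles on its own.
import java.io.IOException;
import org.apache.hadoop.fs.Path;

public abstract class SnapshotCapableFileSystem /* extends FileSystem */ {

  /** Allow snapshots to be taken on the given directory. */
  public void allowSnapshot(Path path) throws IOException {
    throw new UnsupportedOperationException(
        getClass().getSimpleName() + " doesn't support allowSnapshot");
  }

  /** Disallow snapshots on the given directory. */
  public void disallowSnapshot(Path path) throws IOException {
    throw new UnsupportedOperationException(
        getClass().getSimpleName() + " doesn't support disallowSnapshot");
  }
}
{code}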

> FileSystem: Should allowSnapshot() and disallowSnapshot() be part of it?
> 
>
> Key: HDFS-13879
> URL: https://issues.apache.org/jira/browse/HDFS-13879
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Siyao Meng
>Priority: Major
>
> I wonder whether we should add allowSnapshot() and disallowSnapshot() to 
> the FileSystem abstract class.
> I think we should because createSnapshot(), renameSnapshot() and 
> deleteSnapshot() are already part of it.
> Any reason why we don't want to do this?
> Thanks!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-514) Clean Unregister JMX upon SCMConnectionManager#close

2018-09-20 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-514:
--
   Resolution: Fixed
Fix Version/s: 0.2.2
   Status: Resolved  (was: Patch Available)

[~xyao] Thanks for the contribution. I have committed this to the trunk.
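
For context, the shape of the fix is the usual idempotent-close guard; the 
sketch below illustrates that pattern and is not the committed patch (the 
class and field names are invented):

{code:java}
// Sketch: unregister the MBean once and drop the reference, so that a
// second close() is a no-op instead of hitting InstanceNotFoundException.
import javax.management.ObjectName;
import org.apache.hadoop.metrics2.util.MBeans;

public class JmxCloseExample {
  private ObjectName jmxBean; // set when the MBean is registered

  public synchronized void close() {
    if (jmxBean != null) {
      MBeans.unregister(jmxBean);
      jmxBean = null; // makes close() safe to call repeatedly
    }
  }
}
{code}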

> Clean Unregister JMX upon SCMConnectionManager#close
> 
>
> Key: HDDS-514
> URL: https://issues.apache.org/jira/browse/HDDS-514
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 0.2.2
>
> Attachments: HDDS-514.001.patch
>
>
> Have seen this during unit testing; this ticket is opened to safely 
> unregister MBeans for SCMConnectionManager.
>  
> {code}
> 2018-09-19 22:18:14,059 WARN util.MBeans (MBeans.java:unregister(145)) - 
> Error unregistering Hadoop:service=HddsDatanode,name=SCMConnectionManager-5 
> javax.management.InstanceNotFoundException: 
> Hadoop:service=HddsDatanode,name=SCMConnectionManager-5 at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
>  at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
>  at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
>  at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:143) at 
> org.apache.hadoop.ozone.container.common.statemachine.SCMConnectionManager.close(SCMConnectionManager.java:194)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.close(DatanodeStateMachine.java:232)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.stopDaemon(DatanodeStateMachine.java:343)
>  at 
> org.apache.hadoop.ozone.HddsDatanodeService.stop(HddsDatanodeService.java:211)
>  at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:237)
>  at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:263)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.TestNodeFailure.testPipelineFail(TestNodeFailure.java:119)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) 
> 2018-09-19 22:18:14,059 INFO ozoneimpl.OzoneContainer 
> (OzoneContainer.java:stop(149)) - Attempting to stop container services.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HDFS-13924) Handle BlockMissingException when reading from observer

2018-09-20 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622626#comment-16622626
 ] 

Chen Liang commented on HDFS-13924:
---

Thanks for the update [~csun]! Just to add to my previous comment: what I was 
thinking of was to handle the retry in a more uniform fashion. Specifically, 
in the ideal situation, on the client side it should always be only the 
ProxyProvider that handles the NN redirecting logic. To this end, I would 
consider the server side a better place to handle this than DFSInputStream: 
the server side throws an exception, then the ProxyProvider does the 
redirecting properly, so DFSInputStream is hidden from the retry and doesn't 
need to do anything in addition.

So IMO, the better way may be, just like you mentioned, to create a new 
exception, say ObserverOperationFailException, and throw it in all the 
situations where the Observer cannot successfully handle a request and a 
retry against the active is worthwhile. Whenever ObserverProxyProvider sees 
this exception, it tries again with the active. Something along this line; a 
rough sketch is below.
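
Roughly the shape this could take (the exception name is the one floated 
above; everything else is invented for illustration and simplified from the 
real reflection plumbing):

{code:java}
// Sketch: the observer throws a dedicated exception for requests it cannot
// serve, and the proxy provider retries the same call against the active.
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class ObserverRetrySketch {

  /** Thrown by an observer for any request worth retrying on the active. */
  public static class ObserverOperationFailException extends IOException {
    public ObserverOperationFailException(String msg) {
      super(msg);
    }
  }

  /** Simplified stand-in for the proxy provider's invocation handler. */
  public Object invoke(Method method, Object[] args, Object observerProxy,
      Object activeProxy) throws Throwable {
    try {
      return method.invoke(observerProxy, args);
    } catch (InvocationTargetException e) {
      if (e.getCause() instanceof ObserverOperationFailException) {
        // Observer could not serve the request; fall back to the active.
        return method.invoke(activeProxy, args);
      }
      throw e.getCause();
    }
  }
}
{code}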

> Handle BlockMissingException when reading from observer
> ---
>
> Key: HDFS-13924
> URL: https://issues.apache.org/jira/browse/HDFS-13924
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Priority: Major
>
> Internally we found that reading from ObserverNode may result in a 
> {{BlockMissingException}}. This may happen when the observer sees a smaller 
> number of DNs than the active (maybe due to communication issues with those 
> DNs), or (we guess) late block reports from some DNs to the observer. This 
> error happens in 
> [DFSInputStream#chooseDataNode|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L846],
>  when no valid DN can be found for the {{LocatedBlock}} obtained from the NN 
> side.
> One potential solution (although a little hacky) is to ask the 
> {{DFSInputStream}} to retry the active when this happens. The retry logic is 
> already present in the code - we just have to dynamically set a flag to ask 
> the {{ObserverReadProxyProvider}} to try the active in this case.
> cc [~shv], [~xkrogen], [~vagarychen], [~zero45] for discussion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13543) when datanode have some unmounted disks, disk balancer should skip these disks not throw IllegalArgumentException

2018-09-20 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDFS-13543:
-

Assignee: Shweta

> when datanode have some unmounted disks, disk balancer should skip these 
> disks not throw IllegalArgumentException
> -
>
> Key: HDFS-13543
> URL: https://issues.apache.org/jira/browse/HDFS-13543
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.0
>Reporter: luoge123
>Assignee: Shweta
>Priority: Major
>
> When a datanode has an unmounted disk, the disk balancer gets a disk 
> capacity of zero from the storage report, which causes 
> getVolumeInfoFromStorageReports to throw an IllegalArgumentException. A 
> sketch of the proposed skip behavior follows the stack trace below.
> {code:java}
> java.lang.IllegalArgumentException
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
> at 
> org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume.setUsed(DiskBalancerVolume.java:268)
> at 
> org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getVolumeInfoFromStorageReports(DBNameNodeConnector.java:148)
> at 
> org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getNodes(DBNameNodeConnector.java:90)
> at 
> org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster.readClusterInfo(DiskBalancerCluster.java:133)
> at 
> org.apache.hadoop.hdfs.server.diskbalancer.command.Command.readClusterInfo(Command.java:123)
> at 
> org.apache.hadoop.hdfs.server.diskbalancer.command.ReportCommand.execute(ReportCommand.java:74)
> at 
> org.apache.hadoop.hdfs.tools.DiskBalancerCLI.dispatch(DiskBalancerCLI.java:468)
> at org.apache.hadoop.hdfs.tools.DiskBalancerCLI.run(DiskBalancerCLI.java:183)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.hdfs.tools.DiskBalancerCLI.main(DiskBalancerCLI.java:164)
> {code}
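
A sketch of the direction the summary suggests - skip zero-capacity volumes 
instead of failing the precondition. The Volume type and method below are 
stand-ins for illustration, not the eventual patch:

{code:java}
// Sketch: filter out zero-capacity volumes (typically unmounted disks)
// before building the disk balancer model, instead of letting
// DiskBalancerVolume.setUsed() fail its Preconditions check.
import java.util.ArrayList;
import java.util.List;

public class SkipUnmountedDisks {

  static class Volume {
    final String id;
    final long capacity;

    Volume(String id, long capacity) {
      this.id = id;
      this.capacity = capacity;
    }
  }

  static List<Volume> usableVolumes(List<Volume> reported) {
    List<Volume> usable = new ArrayList<>();
    for (Volume v : reported) {
      if (v.capacity <= 0) {
        // An unmounted disk reports zero capacity: warn and skip it.
        System.err.println("Skipping volume " + v.id + ": zero capacity");
        continue;
      }
      usable.add(v);
    }
    return usable;
  }
}
{code}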



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-394) Rename *Key Apis in DatanodeContainerProtocol to *Block apis

2018-09-20 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622615#comment-16622615
 ] 

Hudson commented on HDDS-394:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15032 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15032/])
HDDS-394. Rename *Key Apis in DatanodeContainerProtocol to *Block apis. 
(aengineer: rev 096a7160803494219581c067dfcdb67d2bd0bcdb)
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/helpers/TestKeyData.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/helpers/TestBlockData.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/OpenContainerBlockMap.java
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/interfaces/KeyManager.java
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyUtils.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/BlockData.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/package-info.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestBlockManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/SmallFileUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupInputStream.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueBlockIterator.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/BlockUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rest/TestOzoneRestClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueBlockIterator.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestCloseContainerHandler.java
* (delete) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java
* (edit) 

[jira] [Commented] (HDFS-13927) TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622596#comment-16622596
 ] 

Hadoop QA commented on HDFS-13927:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13927 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940625/HDFS-13927-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9fcd0df0b711 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 429a07e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25108/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25108/testReport/ |
| Max. process+thread count | 2985 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25108/console |

[jira] [Assigned] (HDDS-338) ozoneFS allows to create file key and directory key with same keyname

2018-09-20 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDDS-338:
---

Assignee: Hanisha Koneru

> ozoneFS allows to create file key and directory key with same keyname
> -
>
> Key: HDDS-338
> URL: https://issues.apache.org/jira/browse/HDDS-338
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Major
>
> steps taken :
> --
> 1. created a directory through ozoneFS interface.
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone fs -mkdir /temp1/
> 2018-08-08 13:50:26 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> hadoop@1a1fa8a11332:~/bin$ ./ozone fs -ls /
> 2018-08-08 14:09:59 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> drwxrwxrwx - 0 2018-08-08 13:51 /temp1{noformat}
> 2. create a new key with name 'temp1'  at same bucket.
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/temp1 
> -file /etc/passwd
> 2018-08-08 14:10:34 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 2:10:36 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_3.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){noformat}
> Observed that there are multiple entries for 'temp1' when the ozone fs -ls 
> command is run. Also, both entries are treated as files, and the '/temp1' 
> directory is not visible anymore.
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone fs -ls /
> 2018-08-08 14:10:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 2 items
> -rw-rw-rw- 1 

[jira] [Commented] (HDFS-13830) Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting snapshottable directory list

2018-09-20 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622579#comment-16622579
 ] 

Wei-Chiu Chuang commented on HDFS-13830:


+1 will commit later today

> Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting 
> snapshottable directory list
> 
>
> Key: HDFS-13830
> URL: https://issues.apache.org/jira/browse/HDFS-13830
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13830.branch-3.0.001.patch, 
> HDFS-13830.branch-3.0.002.patch, HDFS-13830.branch-3.0.003.patch, 
> HDFS-13830.branch-3.0.004.patch
>
>
> HDFS-13141 conflicts with 3.0.3 because of an interface change in 
> HdfsFileStatus.
> This Jira aims to backport the WebHDFS getSnapshottableDirListing() support 
> to branch-3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-522) Implement PutBucket REST endpoint

2018-09-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-522:
---

Assignee: Bharat Viswanadham

> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> The create bucket call creates a bucket in the given volume.
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html
> A stub implementation was created as part of HDDS-444. It still needs to be 
> finalized: check the missing headers and add acceptance tests. A rough 
> sketch of exercising the endpoint is below.
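
For a quick manual check once the endpoint is finalized, a request along these 
lines should exercise it (the gateway host, its port, and the volume/bucket 
path mapping are all placeholders; a sketch, not the implementation):

{code:java}
// Sketch: issuing a PutBucket-style request against the s3 gateway stub.
import java.net.HttpURLConnection;
import java.net.URL;

public class PutBucketExample {
  public static void main(String[] args) throws Exception {
    // Host, port, and the volume/bucket path are assumptions for the sketch.
    URL url = new URL("http://s3g-host:9878/myvolume/mybucket");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    // The S3 PUT Bucket API returns 200 on success.
    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}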



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622563#comment-16622563
 ] 

Siyao Meng commented on HDFS-13876:
---

[~shashikant] Sure. In that case it should be expecting a SnapshotException. 
Working on it.

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch, HDFS-13876.002.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-391) Simplify AuditMessage structure to make audit logging easier to use

2018-09-20 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-391 started by Dinesh Chitlangia.
--
> Simplify AuditMessage structure to make audit logging easier to use
> ---
>
> Key: HDDS-391
> URL: https://issues.apache.org/jira/browse/HDDS-391
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> In HDDS-376 a custom AuditMessage structure was created for use in Audit 
> Logging.
> This Jira proposes to incorporate improvements suggested by [~ajayydv]:
>  * AuditMessage should encapsulate log level, audit status, message and 
> exception.
>  * AuditMessage should use the | delimiter instead of spaces. This will be 
> especially useful when AuditParser is completed as part of HDDS-393.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-394) Rename *Key Apis in DatanodeContainerProtocol to *Block apis

2018-09-20 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-394:
--
   Resolution: Fixed
Fix Version/s: 0.2.2
   Status: Resolved  (was: Patch Available)

[~dineshchitlangia] Thanks for the contribution. [~arpitagarwal] Thanks for the 
comments. I have committed this patch to trunk. While committing I have fixed 
some JavaDoc issues with the help of [~dineshchitlangia].

> Rename *Key Apis in DatanodeContainerProtocol to *Block apis
> 
>
> Key: HDDS-394
> URL: https://issues.apache.org/jira/browse/HDDS-394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.2
>
> Attachments: HDDS-394.001.patch, HDDS-394.002.patch, 
> HDDS-394.003.patch, HDDS-394.004.patch, HDDS-394.005.patch, 
> HDDS-394.006.patch, proto.diff
>
>
> All the block apis in client-datanode interaction are named *Key apis (e.g. 
> PutKey). These can be renamed to *Block apis (e.g. PutBlock).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622542#comment-16622542
 ] 

Siyao Meng commented on HDFS-13876:
---

Thanks for your comments!
[~ljain] I have uploaded patch rev 002 to make the code cleaner as you have 
mentioned.
Plus, in TestHttpFSServer I'm now reusing createDirWithHttp() to reduce 
redundant code in the test cases.
[~jojochuang] I have changed the log message per your suggestion. Looks great.

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch, HDFS-13876.002.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13876:
--
Attachment: HDFS-13876.002.patch
Status: Patch Available  (was: In Progress)

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.0.3, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch, HDFS-13876.002.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13876:
--
Status: In Progress  (was: Patch Available)

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.0.3, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-20 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622527#comment-16622527
 ] 

Arpit Agarwal commented on HDFS-13893:
--

Thanks for this improvement [~ljain].

A more robust fix may be to look for unrecognized options in the returned 
{{CommandLine}} object via {{CommandLine.getArgs}}.

{code}
/** 
 * Retrieve any left-over non-recognized options and arguments
 *
 * @return remaining items passed in but not parsed as an array
 */
public String[] getArgs()
{code}
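
For illustration, a minimal sketch of that check (a hypothetical snippet, 
assuming commons-cli 1.3+ for {{DefaultParser}}; the two options shown are 
just the ones from the scenario above, not the full DiskBalancer option set):

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public final class ArgCheckSketch {
  public static void main(String[] args) throws ParseException {
    Options opts = new Options();
    opts.addOption("plan", true, "hostname to compute a plan for");
    opts.addOption("thresholdPercentage", true, "data skew threshold");
    CommandLine cmd = new DefaultParser().parse(opts, args);
    // Anything the parser did not consume as an option or its value
    // shows up in getArgs() -- fail fast instead of silently ignoring it.
    String[] leftovers = cmd.getArgs();
    if (leftovers.length > 0) {
      throw new IllegalArgumentException(
          "Unrecognized arguments: " + String.join(" ", leftovers));
    }
  }
}
{code}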



> DiskBalancer: no validations for Disk balancer commands 
> 
>
> Key: HDFS-13893
> URL: https://issues.apache.org/jira/browse/HDFS-13893
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13893.001.patch
>
>
> {{Scenario:-}}
>  
>  1. Run the Disk Balancer commands passing extra arguments:
> {noformat} 
> hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
> *sgfsdgfs*
> 2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
> hostname:50077
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
> fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
> hostname:50077 took 23 ms
> 2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
> 2018-08-31 14:57:35,457 INFO command.Command: 
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> Writing plan to:
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> {noformat} 
> Expected Output:- 
> =
> Disk balancer commands should fail if we pass any invalid or extra 
> arguments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff not completely implemented for supporting WebHdfs

2018-09-20 Thread Xun REN (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622521#comment-16622521
 ] 

Xun REN commented on HDFS-13916:


Hi [~jojochuang],

No problem for me to separate this feature into two JIRAs. So I created another 
one here: https://issues.apache.org/jira/browse/HADOOP-15777

And for this JIRA, I will just continue to complete it with more unit tests.

> Distcp SnapshotDiff not completely implemented for supporting WebHdfs
> -
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.patch
>
>
> [~ljain] has worked on the JIRA: 
> https://issues.apache.org/jira/browse/HDFS-13052 to provide the possibility 
> to make DistCP of SnapshotDiff with WebHDFSFileSystem. However, in the patch, 
> there is no modification to the real java class which is used when launching 
> the command "hadoop distcp ..."
>  
> You can check in the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check if the 
> file system is DFS. 
> So I propose to change the class DistCpSync in order to take into 
> consideration what was committed by Lokesh Jain.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622519#comment-16622519
 ] 

Shashikant Banerjee commented on HDFS-13876:


Thanks [~smeng] for working on this. In addition to [~ljain]'s and 
[~jojochuang]'s comments:

Can we also add a test case where we make a directory snapshottable, create a 
few snapshots, and then try to disallow snapshots and verify the correct 
behaviour?
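
A rough sketch of the suggested test (illustrative only; assumes {{dfs}} is a 
{{DistributedFileSystem}} obtained from a running test cluster):

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.SnapshotException;
import org.junit.Assert;

// Make the directory snapshottable, take a snapshot, then try to disallow.
Path dir = new Path("/snaptest");
dfs.mkdirs(dir);
dfs.allowSnapshot(dir);
dfs.createSnapshot(dir, "s1");
try {
  dfs.disallowSnapshot(dir);
  Assert.fail("disallowSnapshot should have thrown SnapshotException");
} catch (SnapshotException e) {
  // Expected: cannot disallow while snapshots still exist.
}
// After removing the snapshot, disallowing should succeed.
dfs.deleteSnapshot(dir, "s1");
dfs.disallowSnapshot(dir);
{code}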

 

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13924) Handle BlockMissingException when reading from observer

2018-09-20 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622487#comment-16622487
 ] 

Chao Sun commented on HDFS-13924:
-

[~xkrogen] brought up a good point in 
[HDFS-13898|https://issues.apache.org/jira/browse/HDFS-13898?focusedCommentId=16622255=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16622255]:
 if we throw the exception from the server side, it may lead to frequent 
re-scanning of observers on the ORPP side. Instead we can throw a special 
exception and make ORPP directly retry the active in this case.
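
A minimal sketch of that idea (the exception name and the ORPP hook below are 
assumptions for illustration, not existing Hadoop classes):

{code:java}
import java.io.IOException;

/** Thrown by the observer to tell ORPP to retry the active NN immediately. */
public class ObserverRetryOnActiveException extends IOException {
  public ObserverRetryOnActiveException(String msg) {
    super(msg);
  }
}

// Hypothetical handling inside the ORPP invocation loop:
// try {
//   return invokeOnObserver(method, args);
// } catch (ObserverRetryOnActiveException e) {
//   // Skip re-scanning the observer list; go straight to the active.
//   return invokeOnActive(method, args);
// }
{code}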

> Handle BlockMissingException when reading from observer
> ---
>
> Key: HDFS-13924
> URL: https://issues.apache.org/jira/browse/HDFS-13924
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Priority: Major
>
> Internally we found that reading from ObserverNode may result in a 
> {{BlockMissingException}}. This may happen when the observer sees a smaller 
> number of DNs than the active (maybe due to communication issues with those 
> DNs), or (we guess) late block reports from some DNs to the observer. This 
> error 
> happens in 
> [DFSInputStream#chooseDataNode|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L846],
>  when no valid DN can be found for the {{LocatedBlock}} got from the NN side.
> One potential solution (although a little hacky) is to ask the 
> {{DFSInputStream}} to retry the active when this happens. The retry logic is 
> already present in the code - we just have to dynamically set a flag to ask 
> the {{ObserverReadProxyProvider}} to try the active in this case.
> cc [~shv], [~xkrogen], [~vagarychen], [~zero45] for discussion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-455) genconf tool must use picocli

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622486#comment-16622486
 ] 

Hadoop QA commented on HDDS-455:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/docs {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} docs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940631/HDDS-455.002.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dfb9fd303d24 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Comment Edited] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622482#comment-16622482
 ] 

Wei-Chiu Chuang edited comment on HDFS-13876 at 9/20/18 6:25 PM:
-

Thanks for the patch. In addition to [~ljain]'s comment, here are a few more:

# Make sense to use PUT method for allowSnapshot/disallowSnapshot
# Like HDFS-13916, once HADOOP-15691 completes we should use PathCapabilities 
instead.
# We can improve this message "allowSnapshot is only supported on 
DistributedFileSystem".
Can we log "allowSnapshot is not supported for HttpFs on " + fs.getClass() + ". 
Please check your fs.defaultFS configuration"? This'll help make 
troubleshooting easier.


was (Author: jojochuang):
1. Make sense to use PUT method for allowSnapshot/disallowSnapshot
2. Like HDFS-13916, once HADOOP-15691 completes we should use PathCapabilities 
instead.
3. We can improve this message "allowSnapshot is only supported on 
DistributedFileSystem".
Can we log "allowSnapshot is not supported for HttpFs on " + fs.getClass() + ". 
Please check your fs.defaultFS configuration"? This'll help make 
troubleshooting easier.

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622482#comment-16622482
 ] 

Wei-Chiu Chuang commented on HDFS-13876:


1. Make sense to use PUT method for allowSnapshot/disallowSnapshot
2. Like HDFS-13916, once HADOOP-15691 completes we should use PathCapabilities 
instead.
3. We can improve this message "allowSnapshot is only supported on 
DistributedFileSystem".
Can we log "allowSnapshot is not supported for HttpFs on " + fs.getClass() + ". 
Please check your fs.defaultFS configuration"? This'll help make 
troubleshooting easier.

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622469#comment-16622469
 ] 

Chao Sun commented on HDFS-13749:
-

Oh cool! Thanks for the help [~xkrogen]! 
BTW: you may want to remove the extra spaces between "})" and "{" - I added 
them in a failed attempt.

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch, HDFS-13749-HDFS-12943.004.patch, 
> HDFS-13749-HDFS-12943.005.patch, HDFS-13749-HDFS-12943.006.patch, 
> HDFS-13749-HDFS-12943.007.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-512) update test.sh to remove robot framework & python-pip installation

2018-09-20 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622413#comment-16622413
 ] 

Elek, Marton commented on HDDS-512:
---

As I understand it, the chill mode threshold defines the percentage of the 
containers (correct me if I am wrong), and for a new cluster we have zero 
containers. We can either improve the cli with an additional option ('ozone 
scmcli wait --chill-mode') or we need a meaningful chill mode definition for 
empty clusters.

> update test.sh to remove robot framework & python-pip installation
> --
>
> Key: HDDS-512
> URL: https://issues.apache.org/jira/browse/HDDS-512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-512.001.patch
>
>
> update test.sh to remove robot framework & python-pip installation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13749:
---
Attachment: HDFS-13749-HDFS-12943.007.patch

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch, HDFS-13749-HDFS-12943.004.patch, 
> HDFS-13749-HDFS-12943.005.patch, HDFS-13749-HDFS-12943.006.patch, 
> HDFS-13749-HDFS-12943.007.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622389#comment-16622389
 ] 

Erik Krogen commented on HDFS-13749:


So, about the checkstyle issue:
{code}
proxyProvider = new ObserverReadProxyProvider(conf, nnURI,
ClientProtocol.class, new ClientHAProxyFactory<ClientProtocol>() {
  @Override
  public ClientProtocol createProxy(Configuration conf,
  InetSocketAddress nnAddr, Class<ClientProtocol> xface,
  UserGroupInformation ugi, boolean withRetries,
  AtomicBoolean fallbackToSimpleAuth) {
return proxyMap.get(nnAddr.toString());
  }
})  {
{code}
Since the {{ClientHAProxyFactory}} is within an argument list, there should be 
an extra level of indent... but currently it is such that the ending brace 
lines up with the top-level. I've attached a v007 patch with indentation that 
both the Hadoop checkstyle and my IntelliJ style agree is correct :) It also 
renames the parameter {{conf}} to {{config}} in the above snippet to avoid a 
field masking checkstyle warning that didn't start getting emitted until I 
fixed the other one.
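
For reference, a sketch of the shape that both checkstyle and IntelliJ accept 
(illustrative, not a verbatim excerpt of the v007 patch):

{code}
proxyProvider = new ObserverReadProxyProvider<>(conf, nnURI,
    ClientProtocol.class, new ClientHAProxyFactory<ClientProtocol>() {
      @Override
      public ClientProtocol createProxy(Configuration config,
          InetSocketAddress nnAddr, Class<ClientProtocol> xface,
          UserGroupInformation ugi, boolean withRetries,
          AtomicBoolean fallbackToSimpleAuth) {
        return proxyMap.get(nnAddr.toString());
      }
    });
{code}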


> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch, HDFS-13749-HDFS-12943.004.patch, 
> HDFS-13749-HDFS-12943.005.patch, HDFS-13749-HDFS-12943.006.patch, 
> HDFS-13749-HDFS-12943.007.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-20 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622379#comment-16622379
 ] 

Ajay Kumar commented on HDDS-448:
-

[~GeLiXin] thanks for rebasing the patch. Patch LGTM. A few comments:
NodeStateMap
* L53/L270/L300/L316: Replace "stat" with "stats"
* L270 Same as above
* getNodeStat: Do we need the read lock, since it is a ConcurrentHashMap?
* getNodeStats: Shall we return an unmodifiable copy of the map? (a sketch 
below)
* setNodeStat/removeNodeStat: Do we need a lock here?

NodeStateManager
* L415/L450 Replace "stat" with "stats"
* L425 Let's propagate back the NodeNotFoundException.
* L432 Rephrase "a map contains all node stats." to "nodeStateMap" or "map 
with node stats"?

SCMNodeManager
L313: Shall we move the put operation to L298?
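
For illustration, a defensive-copy version of getNodeStats could look like 
this (a minimal sketch; the field name {{nodeStats}} and the key/value types 
are assumptions, not the actual NodeStateMap code):

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Assumed internal state: node UUID -> stats, kept in a ConcurrentHashMap.
private final Map<UUID, SCMNodeStat> nodeStats = new ConcurrentHashMap<>();

// Return a read-only snapshot so callers cannot mutate internal state;
// the copy itself needs no explicit lock on a ConcurrentHashMap.
public Map<UUID, SCMNodeStat> getNodeStats() {
  return Collections.unmodifiableMap(new HashMap<>(nodeStats));
}
{code}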


> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch
>
>
> This issue tries to make the SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStateManager (NodeStateMap). It's also 
> described by [~nandakumar131] as a {{TODO}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-455) genconf tool must use picocli

2018-09-20 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-455:
---
Status: Open  (was: Patch Available)

> genconf tool must use picocli
> -
>
> Key: HDDS-455
> URL: https://issues.apache.org/jira/browse/HDDS-455
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.2.1
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: HDDS-455.001.patch, HDDS-455.002.patch
>
>
> Like the ozone shell, the genconf tool should use picocli to be consistent 
> with other cli usage in the ozone world.
> Also replace the command 'output' with 'target' to make it more 
> self-explanatory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-455) genconf tool must use picocli

2018-09-20 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-455:
---
Attachment: HDDS-455.002.patch
Status: Patch Available  (was: Open)

[~elek] Patch 002 addresses your review comments. Thanks.

> genconf tool must use picocli
> -
>
> Key: HDDS-455
> URL: https://issues.apache.org/jira/browse/HDDS-455
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.2.1
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: HDDS-455.001.patch, HDDS-455.002.patch
>
>
> Like the ozone shell, the genconf tool should use picocli to be consistent 
> with other cli usage in the ozone world.
> Also replace the command 'output' with 'target' to make it more 
> self-explanatory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13927) TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA

2018-09-20 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622365#comment-16622365
 ] 

Ajay Kumar commented on HDFS-13927:
---

[~ayushtkn], thanks for the explanation. +1 pending Jenkins.

> TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA
> 
>
> Key: HDFS-13927
> URL: https://issues.apache.org/jira/browse/HDFS-13927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-13927-01.patch, HDFS-13927-02.patch
>
>
> Remove the explicit wait in the test for failed datanode with exact time 
> required for the process to confirm the status.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-447) separate ozone-dist and hadoop-dist projects with real classpath separation

2018-09-20 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-447:
--
Attachment: HDDS-447.004.patch

> separate ozone-dist and hadoop-dist projects with real classpath separation
> ---
>
> Key: HDDS-447
> URL: https://issues.apache.org/jira/browse/HDDS-447
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-447-ozone-0.2.001.patch, HDDS-447.003.patch, 
> HDDS-447.004.patch
>
>
> Currently we have the same hadoop-dist project to create both the ozone and 
> the hadoop distributions.
> To decouple the ozone and hadoop builds it would be great to create two 
> different dist projects.
> The hadoop-dist should be cloned to hadoop-ozone/dist and from 
> hadoop-dist/pom.xml we can remove the hdds/ozone related items and from 
> hadoop-ozone/dist/pom.xml we can remove the core hadoop related part.
> Another issue with the current distribution scheme is the lack of real 
> classpath separation.
> The current hadoop distribution model is defined in the hadoop-project-dist 
> which is parent of all the component projects and the output of the 
> distribution generation will be copied by the dist-layout-stitching. There is 
> no easy way to use command specific classpath as the classpath is defined in 
> component level (hdfs/yarn/mapreduce).
> With this approach we will have a lot of unnecessary dependencies on the 
> classpath (which were not on the classpath at the time of the unit tests), 
> and it's not possible (as an example) to use a different type of jaxrs stack 
> for different services (s3gateway vs scm).
> As a simplified but more effective approach I propose to use the following 
> method:
> 1. don't use hadoop-project-dist for ozone projects any more
> 2. During the build generate a classpath descriptor (with the 
> dependency:build-classpath maven plugin/goal) for all the projects
> 3. During the distribution copy all the required dependencies (with 
> dependency:copy maven plugin/goal) to a lib folder (share/ozone/lib)
> 4. During the distribution copy all the classpath descriptors to the 
> classpath folder (share/ozone/classpath)
> 5. Put only the required jar files to the classpath with reading the 
> classpath descriptor 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13749:

Attachment: HDFS-13749-HDFS-12943.006.patch

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch, HDFS-13749-HDFS-12943.004.patch, 
> HDFS-13749-HDFS-12943.005.patch, HDFS-13749-HDFS-12943.006.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-20 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622350#comment-16622350
 ] 

Chao Sun commented on HDFS-13749:
-

[~xkrogen]: could you take another look at this? The test 
{{TestBlockReaderLocal}} seems flaky, as it succeeded on my local laptop. 
There's one checkstyle error which I'm not sure what to do about. IMO the 
"}) {" indent style should be valid. Let me know your suggestion.

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch, HDFS-13749-HDFS-12943.004.patch, 
> HDFS-13749-HDFS-12943.005.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622348#comment-16622348
 ] 

Hadoop QA commented on HDDS-325:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
53s{color} | {color:red} hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  0s{color} | {color:orange} root: The patch generated 6 new + 8 unchanged - 
0 fixed = 14 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
53s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  6s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m  1s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Commented] (HDDS-447) separate ozone-dist and hadoop-dist projects with real classpath separation

2018-09-20 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622347#comment-16622347
 ] 

Elek, Marton commented on HDDS-447:
---

Fixed in the latest patch (hopefully).

The story:

Now we should define the classpath with artifact names. But which classpath 
should be used for 'ozone fs'? I can't use the 'ozonefs' project, as the hadoop 
dependencies are optional there (or more precisely: provided).

For the objectstore service we have a solution. We have two projects. In 
objectstore-service the core hadoop artifacts are provided (not included), but 
I created another project (hadoop-ozone/datanode) which is the same as 
objectstore-service + container-service but includes the hadoop jar files. So 
the classpath of datanode could be used for the hdds datanode.

The classpath of 'ozone fs' is solved in an easier way. I just added the 
ozonefs project as a dependency to the tools. Now the classpath of the tools 
project could be used for all the tools (ozone scmcli, ozone fs, ...). But it 
introduced a circular dependency. I fixed it by moving 4 test classes to the 
tools project (which also helped to get the classes and the test classes into 
the same project).

> separate ozone-dist and hadoop-dist projects with real classpath separation
> ---
>
> Key: HDDS-447
> URL: https://issues.apache.org/jira/browse/HDDS-447
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-447-ozone-0.2.001.patch, HDDS-447.003.patch, 
> HDDS-447.004.patch
>
>
> Currently we have the same hadoop-dist project to create both the ozone and 
> the hadoop distributions.
> To decouple the ozone and hadoop builds it would be great to create two 
> different dist projects.
> The hadoop-dist should be cloned to hadoop-ozone/dist and from 
> hadoop-dist/pom.xml we can remove the hdds/ozone related items and from 
> hadoop-ozone/dist/pom.xml we can remove the core hadoop related part.
> Another issue with the current distribution scheme is the lack of real 
> classpath separation.
> The current hadoop distribution model is defined in the hadoop-project-dist 
> which is parent of all the component projects and the output of the 
> distribution generation will be copied by the dist-layout-stitching. There is 
> no easy way to use command specific classpath as the classpath is defined in 
> component level (hdfs/yarn/mapreduce).
> With this approach we will have a lot of unnecessary dependencies on the 
> classpath (which were not on the classpath at the time of the unit tests), 
> and it's not possible (as an example) to use a different type of jaxrs stack 
> for different services (s3gateway vs scm).
> As a simplified but more effective approach I propose to use the following 
> method:
> 1. don't use hadoop-project-dist for ozone projects any more
> 2. During the build generate a classpath descriptor (with the 
> dependency:build-classpath maven plugin/goal) for all the projects
> 3. During the distribution copy all the required dependencies (with 
> dependency:copy maven plugin/goal) to a lib folder (share/ozone/lib)
> 4. During the distribution copy all the classpath descriptors to the 
> classpath folder (share/ozone/classpath)
> 5. Put only the required jar files to the classpath with reading the 
> classpath descriptor 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-09-20 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622343#comment-16622343
 ] 

Brahma Reddy Battula commented on HDFS-13790:
-

[~csun] thanks for uploading the patch. Can you please handle the checkstyle 
issues?

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13790-branch-2.000.patch, 
> HDFS-13790-branch-2.9.000.patch, HDFS-13790-branch-2.9.001.patch, 
> HDFS-13790-branch-3.1.000.patch, HDFS-13790-branch-3.1.001.patch, 
> HDFS-13790.000.patch, HDFS-13790.001.patch
>
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13898) Throw retriable exception for getBlockLocations when ObserverNameNode is in safemode

2018-09-20 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622334#comment-16622334
 ] 

Chao Sun edited comment on HDFS-13898 at 9/20/18 4:51 PM:
--

Thanks [~xkrogen] for the explanation. I like the idea of having some 
particular exception to trigger ORPP to directly go to active - this could be 
useful for cases like HDFS-13924. For this particular scenario (observer in 
safemode) though, I think it's fine since I assume normally safemode only 
happens when the observer is starting up, which should not be quite common. 
Also, the safemode could last for quite a while and in my experience the chance 
of RPCs hitting this error is quite high, so it might be better to have all 
clients redirect to a different observer anyway.

Regarding the v002 patch, how about changing the test name to 
{{testObserverNodeSafeModeWithBlockLocations}}? 
{{testObserverNodeSafeModeWithoutBlockLocations}} seems a little confusing to 
me since we are testing the safe mode case with {{getBlockLocations}} calls. 
About the {{HAState}} change, no particular reason except I wanted to make the 
lines shorter :) I'm perfectly fine to change it back.

Will fix the style issues too.


was (Author: csun):
Thanks [~xkrogen] for the explanation. I like the idea of having some 
particular exception to trigger ORPP to directly go to active - this could be 
useful for cases like HDFS-13924. For this particular scenario (observer in 
safemode) though, I think it's fine since I assume normally safemode only 
happens when the observer is starting up, which should not be quite common. 
Also, the safemode could last for quite a while and in my experience the chance 
of RPCs hitting this error is quite high, so it might be better to have all 
clients redirect to a different observer anyway.

Regarding the v002 patch, how about changing the test name to 
{{testObserverNodeSafeModeWithBlockLocations}}? 
{{testObserverNodeSafeModeWithoutBlockLocations}} seems a little confusing to 
me since we are testing the safe mode case with {{getBlockLocations}} calls. 
About the {{HAState}} change, no particular reason except I wanted to make the 
lines shorter :) I'm perfectly fine to change it back.

 

Will fix the style issues too.

> Throw retriable exception for getBlockLocations when ObserverNameNode is in 
> safemode
> 
>
> Key: HDFS-13898
> URL: https://issues.apache.org/jira/browse/HDFS-13898
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13898-HDFS-12943.000.patch, 
> HDFS-13898-HDFS-12943.001.patch, HDFS-13898-HDFS-12943.002.patch
>
>
> When ObserverNameNode is in safe mode, {{getBlockLocations}} may throw a safe 
> mode exception if the given file doesn't have any blocks yet. 
> {code}
> try {
>   checkOperation(OperationCategory.READ);
>   res = FSDirStatAndListingOp.getBlockLocations(
>   dir, pc, srcArg, offset, length, true);
>   if (isInSafeMode()) {
> for (LocatedBlock b : res.blocks.getLocatedBlocks()) {
>   // if safemode & no block locations yet then throw safemodeException
>   if ((b.getLocations() == null) || (b.getLocations().length == 0)) {
> SafeModeException se = newSafemodeException(
> "Zero blocklocations for " + srcArg);
> if (haEnabled && haContext != null &&
> haContext.getState().getServiceState() == 
> HAServiceState.ACTIVE) {
>   throw new RetriableException(se);
> } else {
>   throw se;
> }
>   }
> }
>   }
> {code}
> It only throws {{RetriableException}} for the active NN, so requests on the 
> observer may just fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13898) Throw retriable exception for getBlockLocations when ObserverNameNode is in safemode

2018-09-20 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622334#comment-16622334
 ] 

Chao Sun commented on HDFS-13898:
-

Thanks [~xkrogen] for the explanation. I like the idea of having some 
particular exception to trigger ORPP to directly go to active - this could be 
useful for cases like HDFS-13924. For this particular scenario (observer in 
safemode) though, I think it's fine since I assume normally safemode only 
happens when the observer is starting up, which should not be quite common. 
Also, the safemode could last for quite a while and in my experience the chance 
of RPCs hitting this error is quite high, so it might be better to have all 
clients redirect to a different observer anyway.

Regarding the v002 patch, how about changing the test name to 
{{testObserverNodeSafeModeWithBlockLocations}}? 
{{testObserverNodeSafeModeWithoutBlockLocations}} seems a little confusing to 
me since we are testing the safe mode case with {{getBlockLocations}} calls. 
About the {{HAState}} change, no particular reason except I wanted to make the 
lines shorter :) I'm perfectly fine to change it back.

 

Will fix the style issues too.

> Throw retriable exception for getBlockLocations when ObserverNameNode is in 
> safemode
> 
>
> Key: HDFS-13898
> URL: https://issues.apache.org/jira/browse/HDFS-13898
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13898-HDFS-12943.000.patch, 
> HDFS-13898-HDFS-12943.001.patch, HDFS-13898-HDFS-12943.002.patch
>
>
> When ObserverNameNode is in safe mode, {{getBlockLocations}} may throw a safe 
> mode exception if the given file doesn't have any blocks yet. 
> {code}
> try {
>   checkOperation(OperationCategory.READ);
>   res = FSDirStatAndListingOp.getBlockLocations(
>   dir, pc, srcArg, offset, length, true);
>   if (isInSafeMode()) {
> for (LocatedBlock b : res.blocks.getLocatedBlocks()) {
>   // if safemode & no block locations yet then throw safemodeException
>   if ((b.getLocations() == null) || (b.getLocations().length == 0)) {
> SafeModeException se = newSafemodeException(
> "Zero blocklocations for " + srcArg);
> if (haEnabled && haContext != null &&
> haContext.getState().getServiceState() == 
> HAServiceState.ACTIVE) {
>   throw new RetriableException(se);
> } else {
>   throw se;
> }
>   }
> }
>   }
> {code}
> It only throws {{RetriableException}} for the active NN, so requests on the 
> observer may just fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13927) TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA

2018-09-20 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622326#comment-16622326
 ] 

Ayush Saxena commented on HDFS-13927:
-

Thanx [~elgoiri] for the comment.
Uploaded the patch with the changes.

> TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA
> 
>
> Key: HDFS-13927
> URL: https://issues.apache.org/jira/browse/HDFS-13927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-13927-01.patch, HDFS-13927-02.patch
>
>
> Remove the explicit wait in the test for failed datanode with exact time 
> required for the process to confirm the status.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13927) TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA

2018-09-20 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13927:

Attachment: HDFS-13927-02.patch

> TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA
> 
>
> Key: HDFS-13927
> URL: https://issues.apache.org/jira/browse/HDFS-13927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-13927-01.patch, HDFS-13927-02.patch
>
>
> Remove the explicit fixed-time wait in the test for a failed datanode; wait 
> only as long as the process needs to confirm the status.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13927) TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA

2018-09-20 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622316#comment-16622316
 ] 

Ayush Saxena commented on HDFS-13927:
-

Thanx [~ajayydv] for the comment.
 I guess you are talking about a check like dn.isDatanodeUp().
 If we dig a little into this method and look at how it concludes whether a 
datanode is up or not, we will see that it ultimately checks whether the 
running state is CONNECTING (i.e. the DN is trying to connect to the 
namenode) or RUNNING (i.e. the DN is running properly); both scenarios mean 
the DN is up and active. In our test scenario we intend to verify that after 
a cluster ID mismatch the DN should not come up, i.e. it should not reach the 
running state; rather, after trying to connect, it should FAIL due to the 
cluster ID mismatch. All we check is whether the DN has concluded that or 
not.

Adding the code segments for better reference.
{code:java}
 public boolean isDatanodeUp() {
for (BPOfferService bp : blockPoolManager.getAllNamenodeThreads()) {
  if (bp.isAlive()) {
return true;
  }
}
return false;
  }
{code}
Moving on to bp.isAlive():
{code:java}
  boolean isAlive() {
for (BPServiceActor actor : bpServices) {
  if (actor.isAlive()) {
return true;
  }
}
return false;
  }
{code}
Checking isAlive() for each actor:
{code:java}
  boolean isAlive() {
if (!shouldServiceRun || !bpThread.isAlive()) {
  return false;
}
return runningState == BPServiceActor.RunningState.RUNNING
|| runningState == BPServiceActor.RunningState.CONNECTING;
  }
{code}
Here it finally checks only the state. Hope this clarifies the doubt. :)
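
For what it's worth, a minimal sketch of how the explicit timed wait could be 
replaced with a condition-based one, assuming the test holds a {{dn}} 
reference to the DataNode and can use {{GenericTestUtils.waitFor}} (an 
illustration only, not the actual patch):
{code:java}
// Sketch: poll the isDatanodeUp() chain quoted above instead of sleeping
// for a fixed amount of time. Once every BPServiceActor reaches
// RunningState.FAILED, isAlive() returns false and isDatanodeUp() flips
// to false, so the wait ends as soon as the DN gives up.
GenericTestUtils.waitFor(() -> !dn.isDatanodeUp(),
    100,     // check every 100 ms
    10000);  // time out after 10 s
{code}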

> TestDataNodeMultipleRegistrations#testDNWithInvalidStorageWithHA
> 
>
> Key: HDFS-13927
> URL: https://issues.apache.org/jira/browse/HDFS-13927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-13927-01.patch
>
>
> Remove the explicit fixed-time wait in the test for a failed datanode; wait 
> only as long as the process needs to confirm the status.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-09-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622315#comment-16622315
 ] 

Hadoop QA commented on HDDS-370:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
51s{color} | {color:green} integration-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-370 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940605/HDDS-370.04.patch |
| Optional Tests |  
