[jira] [Updated] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13042:
-
Affects Version/s: 3.0.0
   Labels: RBF  (was: )
 Target Version/s: 3.1.0, 2.9.1

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch, HDFS-13042.003.patch
>
>
> The Router should heartbeat its state to the State Store.
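The heartbeat flow can be sketched with a toy model (illustrative Python, not the actual Hadoop Java code; the class and field names here are invented, not taken from the patch):

```python
import time

class StateStore:
    """Toy stand-in for the federation State Store."""
    def __init__(self):
        self.records = {}

    def update(self, router_id, state, timestamp):
        # Upsert the membership record for this Router.
        self.records[router_id] = {"state": state, "lastHeartbeat": timestamp}

class RouterHeartbeatService:
    """Periodically reports the Router's state to the State Store."""
    def __init__(self, router_id, store):
        self.router_id = router_id
        self.store = store
        self.state = "RUNNING"

    def heartbeat_once(self, now=None):
        # One heartbeat tick: record the current state plus a timestamp,
        # so stale Routers can be detected by their last-heartbeat age.
        self.store.update(self.router_id, self.state, now or time.time())

store = StateStore()
svc = RouterHeartbeatService("router-1", store)
svc.heartbeat_once(now=1000.0)
print(store.records["router-1"])  # {'state': 'RUNNING', 'lastHeartbeat': 1000.0}
```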



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13042:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.1
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.0, branch-2 and branch-2.9.

Thanks [~elgoiri] for the contribution!

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch, HDFS-13042.003.patch
>
>
> The Router should heartbeat its state to the State Store.






[jira] [Commented] (HDFS-13064) Httpfs should return json instead of html when writing to a file without Content-Type

2018-01-24 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338865#comment-16338865
 ] 

Mukul Kumar Singh commented on HDFS-13064:
--

Thanks for working on this [~zhenyi]. The patch looks good to me.
Can you please also paste the JSON output as well?
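For context, WebHDFS-style endpoints conventionally serialize errors as a RemoteException JSON object. A sketch of the kind of body the patch is expected to produce (illustrative values, not output pasted from the actual patch):

```python
import json

def json_error_body(exception, java_class, message):
    """Build a WebHDFS-style RemoteException error body."""
    return json.dumps({
        "RemoteException": {
            "exception": exception,
            "javaClassName": java_class,
            "message": message,
        }
    })

body = json_error_body(
    "IOException",
    "java.io.IOException",
    "Data upload requests must have content-type set to 'application/octet-stream'",
)
parsed = json.loads(body)
print(parsed["RemoteException"]["exception"])  # IOException
```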



> Httpfs should return json instead of html when writing to a file without 
> Content-Type
> --
>
> Key: HDFS-13064
> URL: https://issues.apache.org/jira/browse/HDFS-13064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.0.0
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HDFS-13064.001.patch
>
>
> When I create an hdfs file, I get the following response.
>  
> {code:java}
> zdh102:~ # curl -i -X PUT 
> "http://10.43.183.103:14000/webhdfs/v1/2.txt?op=CREATE&user.name=hdfs&data=true"
> HTTP/1.1 400 Bad Request
> Server: Apache-Coyote/1.1
> Set-Cookie: 
> hadoop.auth="u=hdfs&p=hdfs&t=simple&e=1516901333684&s=wYqDlu/ovRxay9d6I6UmoH77KKI=";
>  Path=/; Expires=Thu, 25-Jan-2018 17:28:53 GMT; HttpOnly
> Content-Type: text/html;charset=utf-8
> Content-Language: en
> Content-Length: 1122
> Date: Thu, 25 Jan 2018 07:28:53 GMT
> Connection: close
> <html><head><title>Apache Tomcat/7.0.82 - Error report</title></head><body>
> <h1>HTTP Status 400 - Data upload requests must have 
> content-type set to 'application/octet-stream'</h1><hr 
> noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> Data 
> upload requests must have content-type set to 
> 'application/octet-stream'</p><p><b>description</b> The request sent 
> by the client was syntactically incorrect.</p><hr 
> noshade="noshade"><h3>Apache Tomcat/7.0.82</h3></body></html>zdh102:~ # 
> {code}
>  
>  






[jira] [Updated] (HDFS-13064) Httpfs should return json instead of html when writing to a file without Content-Type

2018-01-24 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HDFS-13064:
---
Status: Patch Available  (was: Open)

> Httpfs should return json instead of html when writing to a file without 
> Content-Type
> --
>
> Key: HDFS-13064
> URL: https://issues.apache.org/jira/browse/HDFS-13064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.0.0
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HDFS-13064.001.patch
>
>
> When I create an hdfs file, I get the following response.
>  
> {code:java}
> zdh102:~ # curl -i -X PUT 
> "http://10.43.183.103:14000/webhdfs/v1/2.txt?op=CREATE&user.name=hdfs&data=true"
> HTTP/1.1 400 Bad Request
> Server: Apache-Coyote/1.1
> Set-Cookie: 
> hadoop.auth="u=hdfs&p=hdfs&t=simple&e=1516901333684&s=wYqDlu/ovRxay9d6I6UmoH77KKI=";
>  Path=/; Expires=Thu, 25-Jan-2018 17:28:53 GMT; HttpOnly
> Content-Type: text/html;charset=utf-8
> Content-Language: en
> Content-Length: 1122
> Date: Thu, 25 Jan 2018 07:28:53 GMT
> Connection: close
> <html><head><title>Apache Tomcat/7.0.82 - Error report</title></head><body>
> <h1>HTTP Status 400 - Data upload requests must have 
> content-type set to 'application/octet-stream'</h1><hr 
> noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> Data 
> upload requests must have content-type set to 
> 'application/octet-stream'</p><p><b>description</b> The request sent 
> by the client was syntactically incorrect.</p><hr 
> noshade="noshade"><h3>Apache Tomcat/7.0.82</h3></body></html>zdh102:~ # 
> {code}
>  
>  






[jira] [Updated] (HDFS-13064) Httpfs should return json instead of html when writing to a file without Content-Type

2018-01-24 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HDFS-13064:
---
Attachment: HDFS-13064.001.patch

> Httpfs should return json instead of html when writing to a file without 
> Content-Type
> --
>
> Key: HDFS-13064
> URL: https://issues.apache.org/jira/browse/HDFS-13064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.0.0
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HDFS-13064.001.patch
>
>
> When I create an hdfs file, I get the following response.
>  
> {code:java}
> zdh102:~ # curl -i -X PUT 
> "http://10.43.183.103:14000/webhdfs/v1/2.txt?op=CREATE&user.name=hdfs&data=true"
> HTTP/1.1 400 Bad Request
> Server: Apache-Coyote/1.1
> Set-Cookie: 
> hadoop.auth="u=hdfs&p=hdfs&t=simple&e=1516901333684&s=wYqDlu/ovRxay9d6I6UmoH77KKI=";
>  Path=/; Expires=Thu, 25-Jan-2018 17:28:53 GMT; HttpOnly
> Content-Type: text/html;charset=utf-8
> Content-Language: en
> Content-Length: 1122
> Date: Thu, 25 Jan 2018 07:28:53 GMT
> Connection: close
> <html><head><title>Apache Tomcat/7.0.82 - Error report</title></head><body>
> <h1>HTTP Status 400 - Data upload requests must have 
> content-type set to 'application/octet-stream'</h1><hr 
> noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> Data 
> upload requests must have content-type set to 
> 'application/octet-stream'</p><p><b>description</b> The request sent 
> by the client was syntactically incorrect.</p><hr 
> noshade="noshade"><h3>Apache Tomcat/7.0.82</h3></body></html>zdh102:~ # 
> {code}
>  
>  






[jira] [Commented] (HDFS-13022) Block Storage: Kubernetes dynamic persistent volume provisioner

2018-01-24 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338856#comment-16338856
 ] 

Mukul Kumar Singh commented on HDFS-13022:
--

Thanks for working on this [~elek]. The patch looks really good to me. Some 
comments though. All the new files are missing the ASF license header.


1) DynamicProvisioner is missing the ASF license header.
2) DynamicProvisioner.java:50, there is a "<p>" header; should we have an ending 
"</p>" somewhere as well?
3) DynamicProvisioner.java:108, I feel that we should have a separate init 
function which should be called from CblockManager; the constructor does not 
need to do the initialization.
4) DynamicProvisioner.java:193, nitpick: can we have all the arguments on one 
line?

5) DynamicProvisionerTest is missing the ASF license header.
6) Can we rename this file to "TestDynamicProvisioner", as all the other Hadoop 
tests are named?
7) ozone-default.xml: dfs.cblock.kubernetes.dynamic-provisioner.enabled is 
duplicated.

> Block Storage: Kubernetes dynamic persistent volume provisioner
> ---
>
> Key: HDFS-13022
> URL: https://issues.apache.org/jira/browse/HDFS-13022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-13022-HDFS-7240.001.patch, 
> HDFS-13022-HDFS-7240.002.patch
>
>
> With HDFS-13017 and HDFS-13018 the cblock/jscsi server could be used in a 
> kubernetes cluster as the backend for iscsi persistent volumes.
> Unfortunately we need to create all the required cblocks manually with 'hdfs 
> cblock -c user volume...' for all the Persistent Volumes.
>  
> But it could be handled with a simple optional component. An additional 
> service could listen on the kubernetes event stream. In case of a new 
> PersistentVolumeClaim (where the storageClassName is cblock), the cblock 
> server could create the cblock in advance AND create the persistent volume.
>  
> The code is very simple, and this additional component could be optional in 
> the cblock server.
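The event-handling logic described above can be sketched as follows (illustrative Python with the Kubernetes watch stream stubbed out as plain dicts; field names are invented, not the actual Kubernetes client API):

```python
def handle_pvc_event(event, created_cblocks, created_pvs):
    """On a new PersistentVolumeClaim with storageClassName 'cblock',
    create the backing cblock and a matching PersistentVolume."""
    if event["type"] != "ADDED":
        return False  # only react to newly created claims
    pvc = event["object"]
    if pvc.get("storageClassName") != "cblock":
        return False  # some other provisioner owns this claim
    created_cblocks.append(pvc["name"])  # stands in for 'hdfs cblock -c ...'
    created_pvs.append({"claim": pvc["name"], "driver": "iscsi"})
    return True

cblocks, pvs = [], []
event = {"type": "ADDED",
         "object": {"name": "claim-1", "storageClassName": "cblock"}}
handled = handle_pvc_event(event, cblocks, pvs)
print(handled, cblocks, pvs)
```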






[jira] [Updated] (HDFS-13064) Httpfs should return json instead of html when writing to a file without Content-Type

2018-01-24 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HDFS-13064:
---
Description: 
When I create an hdfs file, I get the following response.

 
{code:java}
zdh102:~ # curl -i -X PUT 
"http://10.43.183.103:14000/webhdfs/v1/2.txt?op=CREATE&user.name=hdfs&data=true"
HTTP/1.1 400 Bad Request
Server: Apache-Coyote/1.1
Set-Cookie: 
hadoop.auth="u=hdfs&p=hdfs&t=simple&e=1516901333684&s=wYqDlu/ovRxay9d6I6UmoH77KKI=";
 Path=/; Expires=Thu, 25-Jan-2018 17:28:53 GMT; HttpOnly
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 1122
Date: Thu, 25 Jan 2018 07:28:53 GMT
Connection: close

<html><head><title>Apache Tomcat/7.0.82 - Error report</title></head><body>
<h1>HTTP Status 400 - Data upload requests must have content-type 
set to 'application/octet-stream'</h1><hr noshade="noshade"><p><b>type</b> 
Status report</p><p><b>message</b> Data upload requests must have 
content-type set to 'application/octet-stream'</p><p><b>description</b> 
The request sent by the client was syntactically 
incorrect.</p><hr noshade="noshade"><h3>Apache Tomcat/7.0.82</h3></body></html>
zdh102:~ # 

{code}
 

 

> Httpfs should return json instead of html when writing to a file without 
> Content-Type
> --
>
> Key: HDFS-13064
> URL: https://issues.apache.org/jira/browse/HDFS-13064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.0.0
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
>
> When I create an hdfs file, I get the following response.
>  
> {code:java}
> zdh102:~ # curl -i -X PUT 
> "http://10.43.183.103:14000/webhdfs/v1/2.txt?op=CREATE&user.name=hdfs&data=true"
> HTTP/1.1 400 Bad Request
> Server: Apache-Coyote/1.1
> Set-Cookie: 
> hadoop.auth="u=hdfs&p=hdfs&t=simple&e=1516901333684&s=wYqDlu/ovRxay9d6I6UmoH77KKI=";
>  Path=/; Expires=Thu, 25-Jan-2018 17:28:53 GMT; HttpOnly
> Content-Type: text/html;charset=utf-8
> Content-Language: en
> Content-Length: 1122
> Date: Thu, 25 Jan 2018 07:28:53 GMT
> Connection: close
> <html><head><title>Apache Tomcat/7.0.82 - Error report</title></head><body>
> <h1>HTTP Status 400 - Data upload requests must have 
> content-type set to 'application/octet-stream'</h1><hr 
> noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> Data 
> upload requests must have content-type set to 
> 'application/octet-stream'</p><p><b>description</b> The request sent 
> by the client was syntactically incorrect.</p><hr 
> noshade="noshade"><h3>Apache Tomcat/7.0.82</h3></body></html>zdh102:~ # 
> {code}
>  
>  






[jira] [Created] (HDFS-13064) Httpfs should return json instead of html when writing to a file without Content-Type

2018-01-24 Thread fang zhenyi (JIRA)
fang zhenyi created HDFS-13064:
--

 Summary: Httpfs should return json instead of html when writing 
to a file without Content-Type
 Key: HDFS-13064
 URL: https://issues.apache.org/jira/browse/HDFS-13064
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Affects Versions: 3.0.0
Reporter: fang zhenyi
Assignee: fang zhenyi









[jira] [Commented] (HDFS-13018) Block Storage: make the iscsi target address configurable for discovery

2018-01-24 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338838#comment-16338838
 ] 

Mukul Kumar Singh commented on HDFS-13018:
--

Thanks for the contribution [~elek], I have committed this to the feature 
branch.

> Block Storage: make the iscsi target address configurable for discovery
> --
>
> Key: HDFS-13018
> URL: https://issues.apache.org/jira/browse/HDFS-13018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13018-HDFS-7240.001.patch, 
> HDFS-13018-HDFS-7240.002.patch
>
>
> The current jscsi server returns the targetAddress (as IP) and 3260 (as 
> port) during iscsi discovery, but in some cases we need to configure 
> these values.
> For example, in kubernetes the iscsi server could run behind a service, where 
> the address (where the jscsi server is available from the cluster) could be 
> different from the targetAddress where the server is listening.
> I propose to add two more configuration keys to override the default 
> address/port, but it also requires a modification in the jscsi 
> project.
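The proposed override could look roughly like this (illustrative Python; the configuration key names are invented for the sketch, not the actual cblock keys):

```python
def discovery_address(conf, listen_ip):
    """Resolve the address/port advertised during iscsi discovery:
    use the configured overrides if present, otherwise fall back to
    the listening IP and the default iscsi port 3260."""
    addr = conf.get("dfs.cblock.iscsi.advertised.ip") or listen_ip
    port = int(conf.get("dfs.cblock.iscsi.advertised.port", 3260))
    return addr, port

# Without overrides: advertise the listening address and 3260.
print(discovery_address({}, "10.0.0.5"))  # ('10.0.0.5', 3260)
# Behind a Kubernetes service: advertise the service address instead.
print(discovery_address(
    {"dfs.cblock.iscsi.advertised.ip": "cblock.default.svc",
     "dfs.cblock.iscsi.advertised.port": "3261"}, "10.0.0.5"))
```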






[jira] [Updated] (HDFS-13018) Block Storage: make the iscsi target address configurable for discovery

2018-01-24 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-13018:
-
   Resolution: Fixed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

> Block Storage: make the iscsi target address configurable for discovery
> --
>
> Key: HDFS-13018
> URL: https://issues.apache.org/jira/browse/HDFS-13018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13018-HDFS-7240.001.patch, 
> HDFS-13018-HDFS-7240.002.patch
>
>
> The current jscsi server returns the targetAddress (as IP) and 3260 (as 
> port) during iscsi discovery, but in some cases we need to configure 
> these values.
> For example, in kubernetes the iscsi server could run behind a service, where 
> the address (where the jscsi server is available from the cluster) could be 
> different from the targetAddress where the server is listening.
> I propose to add two more configuration keys to override the default 
> address/port, but it also requires a modification in the jscsi 
> project.






[jira] [Updated] (HDFS-13050) [SPS] : Create start/stop script to start external SPS process.

2018-01-24 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-13050:
---
Priority: Blocker  (was: Major)

> [SPS] : Create start/stop script to start external SPS process.
> ---
>
> Key: HDFS-13050
> URL: https://issues.apache.org/jira/browse/HDFS-13050
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: HDFS-13050-HDFS-10285-01.patch
>
>
> As part of this Jira we will add a main class for SPS and modify the 
> {{hadoop-daemon.sh}} script to start the external SPS process.






[jira] [Updated] (HDFS-12995) [SPS] : Implement ExternalSPSContext for establishing RPC communication between SPS Service and NN

2018-01-24 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-12995:
---
Priority: Blocker  (was: Major)

> [SPS] : Implement ExternalSPSContext for establishing RPC communication 
> between SPS Service and NN
> --
>
> Key: HDFS-12995
> URL: https://issues.apache.org/jira/browse/HDFS-12995
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Anu Engineer
>Priority: Blocker
>
> This is the task for implementing the RPC-based communication wrapper for the 
> SPS service to talk to the NN when it requires information for processing.
> Let us say that the name of the external context implementation is 
> ExternalSPSContext, which should implement the APIs of the Context interface.






[jira] [Updated] (HDFS-13057) [SPS]: Revisit configurations to make SPS service modes internal/external/none

2018-01-24 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-13057:
---
Priority: Blocker  (was: Major)

> [SPS]: Revisit configurations to make SPS service modes internal/external/none
> --
>
> Key: HDFS-13057
> URL: https://issues.apache.org/jira/browse/HDFS-13057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Blocker
>
> This task is to revisit the configurations to make SPS service modes - 
> {{internal/external/none}}
> - {{internal}} : represents SPS service should be running with NN
> - {{external}}: represents SPS service will be running outside NN
> - {{none}}: represents the SPS service is completely disabled and zero cost 
> to the system.
> The proposed configuration item is {{dfs.storage.policy.satisfier.running.mode}} 
> in the hdfs-site.xml file, and its value will be a string. The mode can be 
> changed via the {{reconfig}} command.
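Mode parsing could be sketched as follows (illustrative Python; the default value and validation behavior are assumptions for the sketch, not decisions made in this issue):

```python
VALID_MODES = ("internal", "external", "none")

def parse_sps_mode(conf):
    """Read the SPS running mode from configuration; reject unknown values."""
    mode = conf.get("dfs.storage.policy.satisfier.running.mode", "none").lower()
    if mode not in VALID_MODES:
        raise ValueError("Invalid SPS mode: %s (expected one of %s)"
                         % (mode, ", ".join(VALID_MODES)))
    return mode

print(parse_sps_mode({}))  # none (assumed default: fully disabled)
print(parse_sps_mode(
    {"dfs.storage.policy.satisfier.running.mode": "External"}))  # external
```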






[jira] [Commented] (HDFS-13054) Handling PathIsNotEmptyDirectoryException in DFSClient delete call

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338825#comment-16338825
 ] 

genericqa commented on HDFS-13054:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
103 unchanged - 0 fixed = 104 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13054 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907351/HDFS-13054.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0a2b404964e9 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HDFS-13048) LowRedundancyReplicatedBlocks metric can be negative

2018-01-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338824#comment-16338824
 ] 

Akira Ajisaka commented on HDFS-13048:
--

The test case shows how the metric becomes negative. Without the fix, if a 
block is added to LowRedundancyBlocks with corrupt_blocks priority and removed 
with low_redundancy priority, the counter for corrupt_block is not decremented.
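The failure mode can be reproduced with a toy model of the priority queues (illustrative Python, not the actual Java; the "fix" shown is only the general idea of decrementing the queue the block is really stored in):

```python
class LowRedundancyBlocksModel:
    """Toy model: per-priority queues plus per-priority counters."""
    CORRUPT, LOW_REDUNDANCY = "corrupt", "low_redundancy"

    def __init__(self):
        self.queues = {self.CORRUPT: set(), self.LOW_REDUNDANCY: set()}
        self.counters = {self.CORRUPT: 0, self.LOW_REDUNDANCY: 0}

    def add(self, block, priority):
        self.queues[priority].add(block)
        self.counters[priority] += 1

    def remove_buggy(self, block, priority):
        # Bug: trusts the caller's priority, so the wrong counter is
        # decremented when the block sits in a different queue.
        self.queues[priority].discard(block)
        self.counters[priority] -= 1

    def remove_fixed(self, block):
        # Fix: decrement the counter of the queue the block is really in.
        for priority, queue in self.queues.items():
            if block in queue:
                queue.discard(block)
                self.counters[priority] -= 1
                return

buggy = LowRedundancyBlocksModel()
buggy.add("blk_1", buggy.CORRUPT)
buggy.remove_buggy("blk_1", buggy.LOW_REDUNDANCY)
print(buggy.counters)  # {'corrupt': 1, 'low_redundancy': -1}  <- goes negative

fixed = LowRedundancyBlocksModel()
fixed.add("blk_1", fixed.CORRUPT)
fixed.remove_fixed("blk_1")
print(fixed.counters)  # {'corrupt': 0, 'low_redundancy': 0}
```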

> LowRedundancyReplicatedBlocks metric can be negative
> 
>
> Key: HDFS-13048
> URL: https://issues.apache.org/jira/browse/HDFS-13048
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13048-sample.patch, HDFS-13048.001.patch, 
> HDFS-13048.002.patch
>
>
> I'm seeing {{LowRedundancyReplicatedBlocks}} become negative. This should be 
> 0 or positive.






[jira] [Commented] (HDFS-13063) Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338823#comment-16338823
 ] 

genericqa commented on HDFS-13063:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13063 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907634/HDFS-13063.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 74f759212315 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 59828be |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 340 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22807/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md
> 
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Trivial
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Commented] (HDFS-13063) Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338822#comment-16338822
 ] 

Hudson commented on HDFS-13063:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13555 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13555/])
HDFS-13063. Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md 
(aajisaka: rev eb2dd0868864da994612f1dd1452ea38d31f5e85)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md


> Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md
> 
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Trivial
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Commented] (HDFS-13048) LowRedundancyReplicatedBlocks metric can be negative

2018-01-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338819#comment-16338819
 ] 

Akira Ajisaka commented on HDFS-13048:
--

Thanks [~tasanuma0829] for reviewing this. Hi [~eddyxu] and [~andrew.wang], 
would you review this?

> LowRedundancyReplicatedBlocks metric can be negative
> 
>
> Key: HDFS-13048
> URL: https://issues.apache.org/jira/browse/HDFS-13048
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13048-sample.patch, HDFS-13048.001.patch, 
> HDFS-13048.002.patch
>
>
> I'm seeing {{LowRedundancyReplicatedBlocks}} become negative. This should be 
> 0 or positive.
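One common way such a gauge can underflow (a hypothetical illustration, not the actual BlockManager code) is a double decrement for the same block, e.g. once when the block is re-replicated and again when it is deleted:

```python
# Hypothetical sketch of how a low-redundancy gauge can go negative:
# decrementing twice for the same block drives the value below zero.

class LowRedundancyGauge:
    def __init__(self):
        self.value = 0
        self._tracked = set()

    def block_becomes_low_redundancy(self, block_id):
        if block_id not in self._tracked:
            self._tracked.add(block_id)
            self.value += 1

    def buggy_remove(self, block_id):
        # Bug: decrements unconditionally, even if already removed.
        self.value -= 1

    def safe_remove(self, block_id):
        # Fix: only decrement for blocks actually being tracked.
        if block_id in self._tracked:
            self._tracked.remove(block_id)
            self.value -= 1

g = LowRedundancyGauge()
g.block_becomes_low_redundancy("blk_1")
g.buggy_remove("blk_1")
g.buggy_remove("blk_1")   # double decrement -> gauge underflows
negative_value = g.value  # -1
```

Keeping a membership check next to the decrement, as in `safe_remove`, is one defensive pattern that guarantees the metric never drops below zero.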






[jira] [Commented] (HDFS-12950) [oiv] ls will fail in secure cluster

2018-01-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338815#comment-16338815
 ] 

Akira Ajisaka commented on HDFS-12950:
--

I'm thinking we need to write a document.

> [oiv] ls will fail in secure cluster
> -
>
> Key: HDFS-12950
> URL: https://issues.apache.org/jira/browse/HDFS-12950
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
>
> if we execute ls, it will throw following.
> {noformat}
> hdfs dfs -ls webhdfs://127.0.0.1:5978/
> ls: Invalid value for webhdfs parameter "op"
> {noformat}
> When the client is configured with security (i.e. 
> hadoop.security.authentication=KERBEROS), webhdfs will first request a 
> delegation token, which is not implemented, and hence it will 
> throw “ls: Invalid value for webhdfs parameter "op"”.
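The failure mode can be modeled with a toy op dispatcher (hypothetical; the real WebHDFS server is Java and this mirrors only the observable behavior): an unimplemented operation is rejected with the same kind of "Invalid value for webhdfs parameter \"op\"" error, so a Kerberos-enabled client that asks for a delegation token first never reaches the LISTSTATUS call:

```python
# Toy model of a WebHDFS-style op dispatcher (hypothetical illustration).
# If GETDELEGATIONTOKEN is not implemented, a secure client fails before
# it can issue the actual LISTSTATUS request.

SUPPORTED_OPS = {
    "LISTSTATUS": lambda path: ["file1", "file2"],
    "GETFILESTATUS": lambda path: {"path": path, "type": "DIRECTORY"},
}

def handle_request(op, path="/"):
    handler = SUPPORTED_OPS.get(op)
    if handler is None:
        raise ValueError('Invalid value for webhdfs parameter "op"')
    return handler(path)

def secure_ls(path="/"):
    # A Kerberos-enabled client fetches a delegation token first ...
    handle_request("GETDELEGATIONTOKEN")       # raises: op not implemented
    return handle_request("LISTSTATUS", path)  # never reached

try:
    secure_ls("/")
    error = None
except ValueError as e:
    error = str(e)
```

This is why the "ls" fails even though LISTSTATUS itself is supported: the unsupported token request happens first on a secure client.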






[jira] [Commented] (HDFS-13062) Provide support for JN to use separate journal disk per namespace

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338808#comment-16338808
 ] 

genericqa commented on HDFS-13062:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 13 new + 17 unchanged - 0 fixed = 30 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Possible null pointer dereference of journalNodeDir in 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.setConf(Configuration)  
Dereferenced at JournalNode.java:journalNodeDir in 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.setConf(Configuration)  
Dereferenced at JournalNode.java:[line 175] |
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13062 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907616/HDFS-13062.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2f1af30d3d4a 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 

[jira] [Comment Edited] (HDFS-13063) Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338804#comment-16338804
 ] 

Akira Ajisaka edited comment on HDFS-13063 at 1/25/18 6:35 AM:
---

Committed this to trunk, branch-3.0, branch-2, and branch-2.9. Thanks 
[~jiangjianfei] for the contribution and thanks [~bharatviswa] for the review.


was (Author: ajisakaa):
Committed this to trunk. Thanks [~jiangjianfei] for the contribution and thanks 
[~bharatviswa] for the review.

> Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md
> 
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Trivial
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Commented] (HDFS-13063) Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338806#comment-16338806
 ] 

Akira Ajisaka commented on HDFS-13063:
--

Sorry, I missed the result of the Jenkins precommit job. This issue only fixes 
a typo in the documentation, so the precommit job will give a +1.

> Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md
> 
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Trivial
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Updated] (HDFS-13063) Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13063:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.1
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~jiangjianfei] for the contribution and thanks 
[~bharatviswa] for the review.

> Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md
> 
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Trivial
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Updated] (HDFS-13063) Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13063:
-
Summary: Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md  
(was: Fix the incorrect spelling in markdown file 
HDFSHighAvailabilityWithQJM.md)

> Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md
> 
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Trivial
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Updated] (HDFS-13063) Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13063:
-
   Priority: Trivial  (was: Minor)
Component/s: documentation
 Issue Type: Bug  (was: Improvement)

> Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md
> --
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Trivial
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Commented] (HDFS-13063) Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338799#comment-16338799
 ] 

Akira Ajisaka commented on HDFS-13063:
--

+1, checking this in.

> Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md
> --
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Commented] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338774#comment-16338774
 ] 

genericqa commented on HDFS-13042:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13042 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907619/HDFS-13042.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 2561f679549f 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 59828be |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22805/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22805/testReport/ |
| Max. process+thread count | 4113 (vs. ulimit of 5000) |
| modules | C: 

[jira] [Commented] (HDFS-13063) Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338769#comment-16338769
 ] 

Bharat Viswanadham commented on HDFS-13063:
---

+1.

 

> Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md
> --
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Comment Edited] (HDFS-13063) Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338769#comment-16338769
 ] 

Bharat Viswanadham edited comment on HDFS-13063 at 1/25/18 6:01 AM:


+1

LGTM.

 


was (Author: bharatviswa):
+1.

 

> Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md
> --
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Updated] (HDFS-13063) Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-13063:
-
Status: Patch Available  (was: Open)

Fixed the incorrect spelling; no new test case is needed.

> Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md
> --
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Updated] (HDFS-13063) Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-13063:
-
Attachment: HDFS-13063.001.patch

> Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md
> --
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Updated] (HDFS-13063) Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-13063:
-
Affects Version/s: 3.0.0

> Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md
> --
>
> Key: HDFS-13063
> URL: https://issues.apache.org/jira/browse/HDFS-13063
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
>Priority: Minor
> Attachments: HDFS-13063.001.patch
>
>
> The word JournalNode in this markdown file is incorrectly spelled.
> In order for the Standby node to keep its state synchronized with the Active 
> node, both nodes communicate with a group of separate daemons called 
> "JournalNodes" (JNs). When any namespace modification is performed by the 
> Active node, it durably logs a record of the modification to a majority of 
> these JNs. The Standby node is capable of reading the edits from the JNs, and 
> is constantly watching them for changes to the edit log. As the Standby Node 
> sees the edits, it applies them to its own namespace. In the event of a 
> failover, the Standby will ensure that it has read all of the edits from the 
> {color:#33}*JounalNodes*{color} before promoting itself to the Active 
> state. This ensures that the namespace state is fully synchronized before a 
> failover occurs.






[jira] [Created] (HDFS-13063) Fix the incorrect spelling in markdown file HDFSHighAvailabilityWithQJM.md

2018-01-24 Thread Jianfei Jiang (JIRA)
Jianfei Jiang created HDFS-13063:


 Summary: Fix the incorrect spelling in markdown file 
HDFSHighAvailabilityWithQJM.md
 Key: HDFS-13063
 URL: https://issues.apache.org/jira/browse/HDFS-13063
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jianfei Jiang
Assignee: Jianfei Jiang


The word JournalNode in this markdown file is incorrectly spelled.

In order for the Standby node to keep its state synchronized with the Active 
node, both nodes communicate with a group of separate daemons called 
"JournalNodes" (JNs). When any namespace modification is performed by the 
Active node, it durably logs a record of the modification to a majority of 
these JNs. The Standby node is capable of reading the edits from the JNs, and 
is constantly watching them for changes to the edit log. As the Standby Node 
sees the edits, it applies them to its own namespace. In the event of a 
failover, the Standby will ensure that it has read all of the edits from the 
{color:#33}*JounalNodes*{color} before promoting itself to the Active 
state. This ensures that the namespace state is fully synchronized before a 
failover occurs.
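The shared-edits arrangement described above is wired up in hdfs-site.xml; a minimal sketch (hostnames and the local edits path below are placeholders, not values from this issue) looks like:

```xml
<!-- NameNodes write edits to a quorum of JournalNodes via the qjournal URI. -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
<!-- Local directory where each JournalNode stores the edit segments. -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/jn/edits</value>
</property>
```

Each JournalNode listens on port 8485 by default, and a majority of the quorum must acknowledge an edit before the Active NameNode considers it durably logged.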






[jira] [Comment Edited] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338694#comment-16338694
 ] 

Yiqun Lin edited comment on HDFS-13042 at 1/25/18 4:05 AM:
---

+1 pending Jenkins. I can help do the final commit.


was (Author: linyiqun):
+1 pending Jenkins. I can help do the final commit. Would you also attach the 
complete patches in the other JIRAs based on the current change?

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch, HDFS-13042.003.patch
>
>
> The Router should heartbeat its state to the State Store.






[jira] [Comment Edited] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338694#comment-16338694
 ] 

Yiqun Lin edited comment on HDFS-13042 at 1/25/18 3:59 AM:
---

+1 pending Jenkins. I can help do the final commit. Would you also attach the 
complete patches in the other JIRAs based on the current change?


was (Author: linyiqun):
+1 pending Jenkins.

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch, HDFS-13042.003.patch
>
>
> The Router should heartbeat its state to the State Store.






[jira] [Commented] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338694#comment-16338694
 ] 

Yiqun Lin commented on HDFS-13042:
--

+1 pending Jenkins.

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch, HDFS-13042.003.patch
>
>
> The Router should heartbeat its state to the State Store.






[jira] [Updated] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13042:
---
Attachment: HDFS-13042.003.patch

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch, HDFS-13042.003.patch
>
>
> The Router should heartbeat its state to the State Store.






[jira] [Commented] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338686#comment-16338686
 ] 

Íñigo Goiri commented on HDFS-13042:


Thanks [~linyiqun] for the comments; addressed them in  [^HDFS-13042.003.patch].
The other patches are just for reference right now; they will be made more 
complete once this goes in.

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch, HDFS-13042.003.patch
>
>
> The Router should heartbeat its state to the State Store.






[jira] [Commented] (HDFS-13054) Handling PathIsNotEmptyDirectoryException in DFSClient delete call

2018-01-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338677#comment-16338677
 ] 

Arpit Agarwal commented on HDFS-13054:
--

Kicked off a new precommit build, given the large number of UT failures.

> Handling PathIsNotEmptyDirectoryException in DFSClient delete call
> --
>
> Key: HDFS-13054
> URL: https://issues.apache.org/jira/browse/HDFS-13054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13054.000.patch, HDFS-13054.001.patch, 
> HDFS-13054.002.patch
>
>
> In {{DFSClient#delete}} call, if we get 
> {{RemoteException(PathIsNotEmptyDirectoryException)}} we should unwrap and 
> throw {{PathIsNotEmptyDirectoryException}} to the caller.
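The unwrap-and-rethrow idea can be sketched with simplified stand-ins for the Hadoop classes. The real {{RemoteException}} lives in {{org.apache.hadoop.ipc}} and records the server-side exception's class name; everything below is a hypothetical miniature for illustration, not the actual patch:

```java
import java.io.IOException;

// Hypothetical stand-in for org.apache.hadoop.fs.PathIsNotEmptyDirectoryException.
class PathIsNotEmptyDirectoryException extends IOException {
    public PathIsNotEmptyDirectoryException(String msg) { super(msg); }
}

// Hypothetical miniature of org.apache.hadoop.ipc.RemoteException: the RPC layer
// wraps the server-side exception and carries its class name as a string.
class RemoteException extends IOException {
    private final String className;

    RemoteException(String className, String msg) {
        super(msg);
        this.className = className;
    }

    // If the wrapped class is one we are looking for, re-create it as the
    // concrete type so callers can catch it directly; otherwise keep the wrapper.
    IOException unwrapRemoteException(Class<? extends IOException> lookFor) {
        if (lookFor.getName().equals(className)) {
            try {
                return lookFor.getConstructor(String.class).newInstance(getMessage());
            } catch (ReflectiveOperationException e) {
                return this;
            }
        }
        return this;
    }
}

public class UnwrapDemo {
    public static void main(String[] args) {
        RemoteException fromServer = new RemoteException(
            PathIsNotEmptyDirectoryException.class.getName(),
            "`/dir is non empty': Directory is not empty");
        // What the delete() call path would do before rethrowing to the caller:
        IOException unwrapped =
            fromServer.unwrapRemoteException(PathIsNotEmptyDirectoryException.class);
        System.out.println(unwrapped.getClass().getSimpleName());
    }
}
```

With the unwrap in place, callers can catch the concrete exception type instead of pattern-matching on the wrapper's message.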






[jira] [Updated] (HDFS-13062) Provide support for JN to use separate journal disk per namespace

2018-01-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13062:
--
Status: Patch Available  (was: Open)

> Provide support for JN to use separate journal disk per namespace
> -
>
> Key: HDFS-13062
> URL: https://issues.apache.org/jira/browse/HDFS-13062
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13062.00.patch
>
>
> In a federated HA setup, provide support for a separate journal disk for each 
> namespace.






[jira] [Updated] (HDFS-13062) Provide support for JN to use separate journal disk per namespace

2018-01-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13062:
--
Attachment: HDFS-13062.00.patch

> Provide support for JN to use separate journal disk per namespace
> -
>
> Key: HDFS-13062
> URL: https://issues.apache.org/jira/browse/HDFS-13062
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13062.00.patch
>
>
> In a federated HA setup, provide support for a separate journal disk for each 
> namespace.






[jira] [Created] (HDFS-13062) Provide support for JN to use separate journal disk per namespace

2018-01-24 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-13062:
-

 Summary: Provide support for JN to use separate journal disk per 
namespace
 Key: HDFS-13062
 URL: https://issues.apache.org/jira/browse/HDFS-13062
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In a federated HA setup, provide support for a separate journal disk for each 
namespace.






[jira] [Comment Edited] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338658#comment-16338658
 ] 

Yiqun Lin edited comment on HDFS-13042 at 1/25/18 3:09 AM:
---

[~elgoiri], thanks for addressing the comments. Looks very good now, just one 
comment:
{noformat}
+  <property>
+    <name>dfs.federation.router.heartbeat-state.interval</name>
+    <value>5000</value>
+    <description>
+      How often the Router should heartbeat its state into the State Store in milliseconds.
+    </description>
+  </property>
{noformat}
Would you change the default value to {{5s}}? Since this setting supports 
time-unit suffixes, a warning log is generated when a non-suffixed value is 
passed. In addition, we may document that this setting supports multiple 
time-unit suffixes, as {{dfs.federation.router.quota-cache.update.interval}} 
does.

This comment should also be removed.
{noformat}
// Use same interval as the cache update service
{noformat}
+1 once addressed.

Since you have attached the follow-up patches, I will give my initial review 
today.


was (Author: linyiqun):
[~elgoiri], thanks for addressing the comments. Looks very good now, just one 
comment:
{noformat}
+  <property>
+    <name>dfs.federation.router.heartbeat-state.interval</name>
+    <value>5000</value>
+    <description>
+      How often the Router should heartbeat its state into the State Store in milliseconds.
+    </description>
+  </property>
{noformat}
Would you change the default value to {{5s}}? Since this setting supports 
time-unit suffixes, a warning log is generated when a non-suffixed value is 
passed. In addition, we may document that this setting supports multiple 
time-unit suffixes, as {{dfs.federation.router.quota-cache.update.interval}} 
does. +1 once addressed.

Since you have attached the follow-up patches, I will give my initial review 
today.

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch
>
>
> The Router should heartbeat its state to the State Store.






[jira] [Commented] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338658#comment-16338658
 ] 

Yiqun Lin commented on HDFS-13042:
--

[~elgoiri], thanks for addressing the comments. Looks very good now, just one 
comment:
{noformat}
+  <property>
+    <name>dfs.federation.router.heartbeat-state.interval</name>
+    <value>5000</value>
+    <description>
+      How often the Router should heartbeat its state into the State Store in milliseconds.
+    </description>
+  </property>
{noformat}
Would you change the default value to {{5s}}? Since this setting supports 
time-unit suffixes, a warning log is generated when a non-suffixed value is 
passed. In addition, we may document that this setting supports multiple 
time-unit suffixes, as {{dfs.federation.router.quota-cache.update.interval}} 
does. +1 once addressed.

Since you have attached the follow-up patches, I will give my initial review 
today.
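Applied to the hunk quoted above, the suggestion would read roughly like this (a sketch of what the hdfs-default.xml entry could become, not the committed text):

```xml
<property>
  <name>dfs.federation.router.heartbeat-state.interval</name>
  <value>5s</value>
  <description>
    How often the Router should heartbeat its state into the State Store.
    This setting supports multiple time unit suffixes, as described in
    dfs.federation.router.quota-cache.update.interval.
  </description>
</property>
```

Using the suffixed form avoids the warning that Hadoop's time-duration parsing logs when a bare number is given for a setting read with time units.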

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch
>
>
> The Router should heartbeat its state to the State Store.






[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338574#comment-16338574
 ] 

Allen Wittenauer commented on HDFS-13059:
-

bq. Is that our case here? I think so.

Some questions to think about:

a) How does one compare the pie chart from multiple clusters?

b) Given that there will be times when two or more of the slices of the pie 
charts will be nearly the same size, what does that tell the user?

c) When storage is added to a cluster, how is that reflected in the pie chart 
over time?

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, Screen Shot 
> 2018-01-24 at 1.58.28 PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, 
> Screen Shot 2018-01-24 at 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.






[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338539#comment-16338539
 ] 

Íñigo Goiri commented on HDFS-13059:


[~aw], the article you point to mentions:
bq. The one single thing pie charts are good at is when you're comparing 2-3 
different data points with very different amounts of information. 

Is that our case here? I think so.

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, Screen Shot 
> 2018-01-24 at 1.58.28 PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, 
> Screen Shot 2018-01-24 at 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.






[jira] [Commented] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338530#comment-16338530
 ] 

genericqa commented on HDFS-13058:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13058 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907577/HDFS-13058.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux de5fd275a22e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0c559b2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22800/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22800/testReport/ |
| Max. process+thread count | 2815 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22800/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Commented] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338519#comment-16338519
 ] 

genericqa commented on HDFS-13058:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13058 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907574/HDFS-13058.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1c7e79981c6e 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fa8cf4d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22799/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22799/testReport/ |
| Max. process+thread count | 3568 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-01-24 Thread Dennis Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338488#comment-16338488
 ] 

Dennis Huo commented on HDFS-13056:
---

Cleaned up the POC code against branch-2.8 to sketch out the skeleton for 
handling different CrcTypes and for configuring whether to use COMPOSITE_CRC 
via config keys. It still needs unit tests, refactoring, and support for some 
peripheral use cases like WebHDFS, but should otherwise be ready for initial 
comments on the approach: [^HDFS-13056-branch-2.8.001.patch]

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.poc1.patch, hdfs-file-composite-crc32-v1.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-01-24 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Attachment: HDFS-13056-branch-2.8.001.patch

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.poc1.patch, hdfs-file-composite-crc32-v1.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338479#comment-16338479
 ] 

Allen Wittenauer commented on HDFS-13059:
-

As a counterpoint:

http://www.businessinsider.com/pie-charts-are-the-worst-2013-6



> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, Screen Shot 
> 2018-01-24 at 1.58.28 PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, 
> Screen Shot 2018-01-24 at 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12051) Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory

2018-01-24 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338473#comment-16338473
 ] 

Manoj Govindassamy edited comment on HDFS-12051 at 1/25/18 12:23 AM:
-

Thanks for working on this [~mi...@cloudera.com]. A few comments on 
HDFS-12051.07.patch:

{{NameCache.java}}
 * line 97: {{cache = new byte[cacheSize][];}} Since this will take up 
contiguous space, we need to restrict the cache size to something much smaller 
than the current MAX size of 1 << 30. Your thoughts?
 * {{#cache}} now follows the {{open addressing}} model. Any reason why you 
moved to this model compared to your initial design?
 * {{#put()}} 
 ** line 118: on the first cache fill, shouldn't it be a new byte array 
constructed from the passed-in name? Why reuse the caller's array directly?
 ** With the {{open addressing}} model, when you overwrite a cache slot with a 
new name, there could be INodes that already refer to the old name and are now 
cut off from the cache. Though their references are still valid, I want to 
understand why preference is given to new names over old ones.
 * I don't see any cache invalidation even when INodes are removed. This takes 
up memory. Though not huge, design-wise it is not clean to leave the cache 
with stale values and incur the cache lookup penalty in future put() calls.
 * {{#getSize()}}: since there is no cache invalidation and this is an open 
addressing model, the size returned is not right.
 * line 149: {{cacheSizeFor}}: is this roundUp or roundDown to the nearest 
power of 2? Please add a link to {{HashMap#tableSizeFor()}} in the comment to 
show where the code was adapted from.


was (Author: manojg):
Thanks for working on this [~mi...@cloudera.com]. Few comments on 
HDFS-12051.07.patch

{{NameCache.java}}
 * line 97: {{cache = new byte[cacheSize][];}} Since this will take up a 
contiguous space, we need to restrict the cache size to much lesser size than 
your current MAX size of 1 << 30. Your thoughts?
 * {{#cache}} is now following the {{open addressing}} model. Any reasons why 
you moved to this model compared to your initial design?
 * {{#put()}} 
 ** line 118: the first time cache fill .. shouldn't it be a new byte array 
name constructed from the passed in name? Why use the same caller passed in 
name?
 ** With the {{open addressing}} model, when you overwrite the cache slot with 
new names,  there could be INodes which are already referring to this name and 
are cut from the cache. 
 * I don't see any cache invalidation even when the INodes are removed. This 
takes up memory. Though not huge, design wise its not clean to leave the cache 
with stale values and incur cache lookup penalty in the future put() 
 * {{#getSize()}} since there is no cache invalidation, and since this open 
addressing model, the size returned is not right.
 * line 149: {{cacheSizeFor}} is this roundUp or roundDown to the nearest 2 
power. Please add the link to {{HashMap#tableSizeFor()}} in the comment to show 
where the code is inspired from.

> Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly 
> those denoting file/directory names) to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, 
> HDFS-12051.06.patch, HDFS-12051.07.patch
>
>
> When snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code:java}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 

[jira] [Commented] (HDFS-12947) Limit the number of Snapshots allowed to be created for a Snapshottable Directory

2018-01-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338475#comment-16338475
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12947:


In the 003 patch, the limit becomes per FileSystem rather than per snapshottable 
directory.  I think it would be better to keep it per directory, since we should 
be able to support many directories, each with a small number of snapshots.
- Per-directory snapshot limit: a file-system-wide limit in order to protect 
the system.
- Per-directory snapshot quota: for admins to manage snapshot usage.
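The limit-vs-quota distinction above can be sketched as a small predicate 
(illustrative Python; the identifiers are made up, not from any patch):

```python
def may_create_snapshot(count, system_limit, dir_quota=None):
    """Decide whether one more snapshot may be created in a directory.

    count: snapshots currently held by the snapshottable directory.
    system_limit: file-system-wide per-directory cap protecting the NameNode.
    dir_quota: optional per-directory cap set by an admin (None = unset).
    """
    # The effective ceiling is the tighter of the system-wide limit and
    # the admin-set quota, when a quota exists.
    effective = system_limit if dir_quota is None else min(system_limit, dir_quota)
    return count < effective
```

Under this reading, the limit is a global safety valve while the quota is an 
administrative knob that can only tighten, never loosen, the limit.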

> Limit the number of Snapshots allowed to be created for a Snapshottable 
> Directory
> -
>
> Key: HDFS-12947
> URL: https://issues.apache.org/jira/browse/HDFS-12947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-12947.001.patch, HDFS-12947.002.patch, 
> HDFS-12947.003.patch
>
>
> Currently, a snapshottable directory is able to accommodate 65,536 snapshots. 
> In case a directory has a large number of snapshots, deletion of any of the 
> earlier snapshots takes a lot of time, which might lead to a namenode crash 
> (HDFS-11225).
> This jira is introduced to limit the number of snapshots under a 
> snapshottable directory to a reasonable value (say 10) which can be overridden.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12051) Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory

2018-01-24 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338473#comment-16338473
 ] 

Manoj Govindassamy commented on HDFS-12051:
---

Thanks for working on this [~mi...@cloudera.com]. A few comments on 
HDFS-12051.07.patch:

{{NameCache.java}}
 * line 97: {{cache = new byte[cacheSize][];}} Since this will take up 
contiguous space, we need to restrict the cache size to something much smaller 
than the current MAX size of 1 << 30. Your thoughts?
 * {{#cache}} now follows the {{open addressing}} model. Any reason why you 
moved to this model compared to your initial design?
 * {{#put()}} 
 ** line 118: on the first cache fill, shouldn't it be a new byte array 
constructed from the passed-in name? Why reuse the caller's array directly?
 ** With the {{open addressing}} model, when you overwrite a cache slot with a 
new name, there could be INodes that already refer to the old name and are now 
cut off from the cache.
 * I don't see any cache invalidation even when INodes are removed. This takes 
up memory. Though not huge, design-wise it is not clean to leave the cache 
with stale values and incur the cache lookup penalty in future put() calls.
 * {{#getSize()}}: since there is no cache invalidation and this is an open 
addressing model, the size returned is not right.
 * line 149: {{cacheSizeFor}}: is this roundUp or roundDown to the nearest 
power of 2? Please add a link to {{HashMap#tableSizeFor()}} in the comment to 
show where the code was adapted from.
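To make the review points concrete, here is a minimal Python sketch of a 
power-of-two sizing helper in the style of java.util.HashMap#tableSizeFor, 
plus a single-probe, overwrite-on-collision intern cache; this illustrates the 
model under discussion, not the actual NameCache.java:

```python
def cache_size_for(requested, max_size=1 << 30):
    # Round up to the nearest power of two, in the style of
    # java.util.HashMap#tableSizeFor (the assumed inspiration per the review).
    n = requested - 1
    n |= n >> 1
    n |= n >> 2
    n |= n >> 4
    n |= n >> 8
    n |= n >> 16
    return max_size if n >= max_size else (1 if n < 0 else n + 1)

class NameCache:
    # Single-probe open-addressing intern cache: a colliding put simply
    # overwrites the slot, which is the behavior the review questions
    # (older entries get cut off from the cache, though references to
    # them remain valid).
    def __init__(self, size):
        self._cache = [None] * cache_size_for(size)
        self._mask = len(self._cache) - 1

    def put(self, name: bytes) -> bytes:
        slot = hash(name) & self._mask
        if self._cache[slot] == name:
            return self._cache[slot]   # hit: reuse the interned array
        self._cache[slot] = name       # miss or collision: overwrite
        return name
```

Note also the reviewer's sizing concern: with a power-of-two table of array 
references, the table itself is one contiguous allocation, so a MAX of 1 << 30 
slots would be a very large single object.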

> Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly 
> those denoting file/directory names) to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, 
> HDFS-12051.06.patch, HDFS-12051.07.patch
>
>
> When snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code:java}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 

[jira] [Issue Comment Deleted] (HDFS-12950) [oiv] ls will fail in secure cluster

2018-01-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12950:
---
Comment: was deleted

(was: Trying to chime in here.

I feel like implementing a dummy delegation token is the right approach. I 
guess this issue can be worked around by updating the client's authentication 
to SIMPLE from KERBEROS.)

> [oiv] ls will fail in  secure cluster
> -
>
> Key: HDFS-12950
> URL: https://issues.apache.org/jira/browse/HDFS-12950
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
>
> If we execute ls, it will throw the following:
> {noformat}
> hdfs dfs -ls webhdfs://127.0.0.1:5978/
> ls: Invalid value for webhdfs parameter "op"
> {noformat}
> When the client is configured with security (i.e. 
> "hadoop.security.authentication=KERBEROS"), 
> then webhdfs will request a delegation token, which is not implemented, and 
> hence it will throw “ls: Invalid value for webhdfs parameter "op"”.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13061) SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel

2018-01-24 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-13061:
-

 Summary: SaslDataTransferClient#checkTrustAndSend should not trust 
a partially trusted channel
 Key: HDFS-13061
 URL: https://issues.apache.org/jira/browse/HDFS-13061
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar


HDFS-5920 introduces encryption negotiation between client and server based on 
a customizable TrustedChannelResolver class. The TrustedChannelResolver is 
invoked on both client and server side. If the resolver indicates that the 
channel is trusted, then the data transfer will not be encrypted even if 
dfs.encrypt.data.transfer is set to true. 

SaslDataTransferClient#checkTrustAndSend asks the channel resolver whether the 
client and the server address, respectively, are trusted. It decides the 
channel is untrusted, and thus enforces encryption, only if both the client 
and the server are untrusted. *This ticket is opened to change it to not trust 
(and therefore encrypt) if either the client or the server address is not 
trusted.*
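The difference between the reported behavior and the proposed fix can be 
stated as a one-line predicate (illustrative Python, not the Hadoop API):

```python
def channel_trusted(client_trusted, server_trusted, proposed_behavior):
    """Decide whether a data-transfer channel is treated as trusted.

    client_trusted / server_trusted: what a TrustedChannelResolver would
    return for each endpoint (names here are illustrative only).
    """
    if proposed_behavior:
        # Proposed: trust the channel (and skip encryption) only when
        # BOTH endpoints are trusted.
        return client_trusted and server_trusted
    # Reported current behavior: untrusted only when BOTH endpoints are
    # untrusted, so a half-trusted channel still skips encryption.
    return client_trusted or server_trusted
```

A channel with one trusted and one untrusted endpoint flips from trusted 
(unencrypted) under the current logic to untrusted (encrypted) under the 
proposal.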



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13060) Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver

2018-01-24 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-13060:
-

 Summary: Adding a BlacklistBasedTrustedChannelResolver for 
TrustedChannelResolver
 Key: HDFS-13060
 URL: https://issues.apache.org/jira/browse/HDFS-13060
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar


HDFS-5920 introduces encryption negotiation between client and server based on 
a customizable TrustedChannelResolver class. The TrustedChannelResolver is 
invoked on both client and server side. If the resolver indicates that the 
channel is trusted, then the data transfer will not be encrypted even if 
dfs.encrypt.data.transfer is set to true. 

The default trusted channel resolver implementation returns false, indicating 
that the channel is not trusted, which always enables encryption. HDFS-5920 
also added a built-in whitelist-based trusted channel resolver. It allows you 
to put the IP address/network mask of trusted clients/servers in whitelist 
files to skip encryption for certain traffic. 

This ticket is opened to add a blacklist-based trusted channel resolver for 
cases where only certain machines (IPs) are untrusted, without having to add 
each trusted IP individually.
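A blacklist-based resolver of the kind proposed could look roughly like this 
(an illustrative Python sketch; the class and method names are made up, and 
the real implementation would be Java extending TrustedChannelResolver):

```python
import ipaddress

class BlacklistTrustedChannelResolver:
    """Every peer is trusted unless its address falls inside one of the
    blacklisted networks, inverting the whitelist resolver's logic."""

    def __init__(self, blacklisted_networks):
        # Accepts CIDR strings, e.g. "192.168.1.0/24" or "10.0.5.9/32".
        self._nets = [ipaddress.ip_network(n) for n in blacklisted_networks]

    def is_trusted(self, peer_ip):
        addr = ipaddress.ip_address(peer_ip)
        return not any(addr in net for net in self._nets)
```

With this inversion, an operator lists only the handful of untrusted networks 
and every other peer skips encryption by default.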
  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338432#comment-16338432
 ] 

Íñigo Goiri commented on HDFS-13059:


Thanks [~ajayydv], a couple of comments on [^HDFS-13059.002.patch]:
* I would add a comment explaining what the dataset looks like. This would 
basically be a reference for what the output of {{getDFSUsedData()}} looks like.
* There are a couple of extra lines in {{dfshealth.html}} which I'm not sure 
are needed.
* It would be good to do the histogram part of {{dfs-visualizations.js}}, but 
it doesn't have to be part of this JIRA.
* The indentation in {{hadoop.css}} is not very consistent.
* I would probably make the tooltip a little wider.

Other than these small comments, this is good and I think it is useful.

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, Screen Shot 
> 2018-01-24 at 1.58.28 PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, 
> Screen Shot 2018-01-24 at 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338420#comment-16338420
 ] 

genericqa commented on HDFS-13059:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-13059 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13059 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907591/HDFS-13059.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22802/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, Screen Shot 
> 2018-01-24 at 1.58.28 PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, 
> Screen Shot 2018-01-24 at 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338408#comment-16338408
 ] 

Ajay Kumar commented on HDFS-13059:
---

[~elgoiri] Patch v2 moves {{renderPieChart}} to separate js file .

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, Screen Shot 
> 2018-01-24 at 1.58.28 PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, 
> Screen Shot 2018-01-24 at 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13059:
--
Attachment: HDFS-13059.002.patch

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, Screen Shot 
> 2018-01-24 at 1.58.28 PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, 
> Screen Shot 2018-01-24 at 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13059:
--
Attachment: Screen Shot 2018-01-24 at 3.04.17 PM.png

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, Screen Shot 2018-01-24 at 1.58.28 
> PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, Screen Shot 2018-01-24 at 
> 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338386#comment-16338386
 ] 

Ajay Kumar commented on HDFS-13059:
---

[~elgoiri], thanks for having a look. I like the idea; the existing histogram 
code is in {{dfshealth.js}}, so I added this one in the same file. Will update 
the patch.

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, Screen Shot 2018-01-24 at 1.58.28 
> PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12522) Ozone: Remove the Priority Queues used in the Container State Manager

2018-01-24 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338379#comment-16338379
 ] 

Xiaoyu Yao commented on HDFS-12522:
---

Thanks [~anu] for the update. The patch v2 looks pretty good to me. I just 
have a few minor comments:

SCMException.java: NIT: line 117 is an unnecessary change.

BlockManagerImpl.java line 270: the update of allocatedBytes looks good to me. 
I notice that we have a TODO at line 421 to reclaim the deleted block space 
from the container. Please ensure we have an open JIRA to fix this.

ContainerMapping.java line 408: the number of parameters does not match the 
number of {} placeholders in the LOG.error statement.

> Ozone: Remove the Priority Queues used in the Container State Manager
> -
>
> Key: HDFS-12522
> URL: https://issues.apache.org/jira/browse/HDFS-12522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDFS-12522-HDFS-7240.001.patch, 
> HDFS-12522-HDFS-7240.002.patch
>
>
> During code review of HDFS-12387, it was suggested that we remove the 
> priority queues that was used in ContainerStateManager. This JIRA tracks that 
> issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338347#comment-16338347
 ] 

Arpit Agarwal commented on HDFS-13058:
--

+1 for the v01 patch, pending jenkins.

> Fix dfs.namenode.shared.edits.dir in TestJournalNode
> 
>
> Key: HDFS-13058
> URL: https://issues.apache.org/jira/browse/HDFS-13058
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13058.00.patch, HDFS-13058.01.patch
>
>
> In TestJournalNode.java
> dfs.namenode.shared.edits.dir is set as below.
> conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
>  "qjournal://journalnode0:9900;journalnode1:9901");
>  
> From HDFS documentation:
> [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html]
> The URI should be of the form: 
> {{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 
>  
> Found this while working on another jira: parsing this 
> dfs.namenode.shared.edits.dir property threw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13057) [SPS]: Revisit configurations to make SPS service modes internal/external/none

2018-01-24 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338343#comment-16338343
 ] 

Uma Maheswara Rao G commented on HDFS-13057:


[~rakeshr] Along with this Jira, could you please take care of one comment, i.e., 
we need to clear the spsPathIds when the BM calls disable.

> [SPS]: Revisit configurations to make SPS service modes internal/external/none
> --
>
> Key: HDFS-13057
> URL: https://issues.apache.org/jira/browse/HDFS-13057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
>
> This task is to revisit the configurations to make SPS service modes - 
> {{internal/external/none}}
> - {{internal}} : represents SPS service should be running with NN
> - {{external}}: represents SPS service will be running outside NN
> - {{none}}: represents the SPS service is completely disabled and zero cost 
> to the system.
> Proposed configuration {{dfs.storage.policy.satisfier.running.mode}} item in 
> hdfs-site.xml file and value will be string. The mode can be changed via 
> {{reconfig}} command.
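For illustration, a hypothetical hdfs-site.xml entry matching this proposal (the property name and values are as proposed above and may change before commit):

```xml
<!-- Hypothetical entry per the proposal; name/values not yet final. -->
<property>
  <name>dfs.storage.policy.satisfier.running.mode</name>
  <value>internal</value>
  <description>SPS service mode: internal (runs inside the NN),
    external (runs outside the NN), or none (disabled).</description>
</property>
```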



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338341#comment-16338341
 ] 

Íñigo Goiri commented on HDFS-13042:


This is giving out of memory errors in the tests.
Not sure what the reasons are.

> RBF: Heartbeat Router State
> ---
>
> Key: HDFS-13042
> URL: https://issues.apache.org/jira/browse/HDFS-13042
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13042.000.patch, HDFS-13042.001.patch, 
> HDFS-13042.002.patch
>
>
> The Router should heartbeat its state to the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338336#comment-16338336
 ] 

Bharat Viswanadham commented on HDFS-13058:
---

Thanks [~arpitagarwal] for review.

Addressed the review comment.

> Fix dfs.namenode.shared.edits.dir in TestJournalNode
> 
>
> Key: HDFS-13058
> URL: https://issues.apache.org/jira/browse/HDFS-13058
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13058.00.patch, HDFS-13058.01.patch
>
>
> In TestJournalNode.java
> dfs.namenode.shared.edits.dir is set as below.
> conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
>  "qjournal://journalnode0:9900;journalnode1:9901");
>  
> From HDFS documentation:
> [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html]
> The URI should be of the form: 
> {{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 
>  
> Found this while working on another jira: parsing this 
> dfs.namenode.shared.edits.dir property threw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13058:
--
Attachment: HDFS-13058.01.patch

> Fix dfs.namenode.shared.edits.dir in TestJournalNode
> 
>
> Key: HDFS-13058
> URL: https://issues.apache.org/jira/browse/HDFS-13058
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13058.00.patch, HDFS-13058.01.patch
>
>
> In TestJournalNode.java
> dfs.namenode.shared.edits.dir is set as below.
> conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
>  "qjournal://journalnode0:9900;journalnode1:9901");
>  
> From HDFS documentation:
> [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html]
> The URI should be of the form: 
> {{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 
>  
> Found this while working on another jira: parsing this 
> dfs.namenode.shared.edits.dir property threw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338331#comment-16338331
 ] 

Íñigo Goiri commented on HDFS-13059:


This looks good.
Should we put the {{renderPieChart}} into a separate js file and import it? It's a 
pretty good wrapper for d3.

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, Screen Shot 2018-01-24 at 1.58.28 
> PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338323#comment-16338323
 ] 

Arpit Agarwal commented on HDFS-13058:
--

Looks good, one minor comment for coding style:
{code}
+  "qjournal://journal0\\:9900;journal1:9901/"+journalId);
{code}

There should be spaces before and after the {{+}} operator.

+1 otherwise, pending Jenkins.
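For reference, a sketch of how the corrected setting might look (the journal id and ports here are illustrative, and plain java.net.URI stands in for the actual test class):

```java
import java.net.URI;

public class SharedEditsDirExample {
    public static void main(String[] args) throws Exception {
        String journalId = "ns1"; // illustrative journal id
        // Hosts separated by ';' in the authority, the journal id as the
        // path segment, and spaces around the '+' operator per the comment.
        String sharedEditsDir =
            "qjournal://journalnode0:9900;journalnode1:9901/" + journalId;
        URI uri = new URI(sharedEditsDir);
        System.out.println(uri.getPath()); // "/ns1"
    }
}
```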

> Fix dfs.namenode.shared.edits.dir in TestJournalNode
> 
>
> Key: HDFS-13058
> URL: https://issues.apache.org/jira/browse/HDFS-13058
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13058.00.patch
>
>
> In TestJournalNode.java
> dfs.namenode.shared.edits.dir is set as below.
> conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
>  "qjournal://journalnode0:9900;journalnode1:9901");
>  
> From HDFS documentation:
> [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html]
> The URI should be of the form: 
> {{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 
>  
> Found this while working on another jira: parsing this 
> dfs.namenode.shared.edits.dir property threw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13058:
--
Description: 
In TestJournalNode.java

dfs.namenode.shared.edits.dir is set as below.

conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
 "qjournal://journalnode0:9900;journalnode1:9901");

 

From HDFS documentation:

[https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html]

The URI should be of the form: 
{{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 

 

Found this while working on another jira: parsing this 
dfs.namenode.shared.edits.dir property threw an exception.

  was:
In TestJournalNode.java

dfs.namenode.shared.edits.dir is set as below.

conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
 "qjournal://journalnode0:9900;journalnode1:9901");

 

From HDFS documentation:

https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

The URI should be of the form: 
{{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 


> Fix dfs.namenode.shared.edits.dir in TestJournalNode
> 
>
> Key: HDFS-13058
> URL: https://issues.apache.org/jira/browse/HDFS-13058
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13058.00.patch
>
>
> In TestJournalNode.java
> dfs.namenode.shared.edits.dir is set as below.
> conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
>  "qjournal://journalnode0:9900;journalnode1:9901");
>  
> From HDFS documentation:
> [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html]
> The URI should be of the form: 
> {{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 
>  
> Found this while working on another jira: parsing this 
> dfs.namenode.shared.edits.dir property threw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13058:
--
Status: Patch Available  (was: In Progress)

> Fix dfs.namenode.shared.edits.dir in TestJournalNode
> 
>
> Key: HDFS-13058
> URL: https://issues.apache.org/jira/browse/HDFS-13058
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13058.00.patch
>
>
> In TestJournalNode.java
> dfs.namenode.shared.edits.dir is set as below.
> conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
>  "qjournal://journalnode0:9900;journalnode1:9901");
>  
> From HDFS documentation:
> https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The URI should be of the form: 
> {{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13058:
--
Attachment: HDFS-13058.00.patch

> Fix dfs.namenode.shared.edits.dir in TestJournalNode
> 
>
> Key: HDFS-13058
> URL: https://issues.apache.org/jira/browse/HDFS-13058
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13058.00.patch
>
>
> In TestJournalNode.java
> dfs.namenode.shared.edits.dir is set as below.
> conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
>  "qjournal://journalnode0:9900;journalnode1:9901");
>  
> From HDFS documentaion:
> https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The URI should be of the form: 
> {{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13058 started by Bharat Viswanadham.
-
> Fix dfs.namenode.shared.edits.dir in TestJournalNode
> 
>
> Key: HDFS-13058
> URL: https://issues.apache.org/jira/browse/HDFS-13058
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In TestJournalNode.java
> dfs.namenode.shared.edits.dir is set as below.
> conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
>  "qjournal://journalnode0:9900;journalnode1:9901");
>  
> From HDFS documentation:
> https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The URI should be of the form: 
> {{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338304#comment-16338304
 ] 

Ajay Kumar commented on HDFS-13059:
---

Screenshots are from a local setup where DFS used is negligible. 
In a real cluster, DFS used and remaining will occupy most of the pie chart.

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, Screen Shot 2018-01-24 at 1.58.28 
> PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13059:
--
Attachment: Screen Shot 2018-01-24 at 1.58.33 PM.png

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, Screen Shot 2018-01-24 at 1.58.28 
> PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13059:
--
Attachment: Screen Shot 2018-01-24 at 1.58.28 PM.png

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, Screen Shot 2018-01-24 at 1.58.28 
> PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13059:
--
Status: Patch Available  (was: Open)

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13059:
--
Attachment: HDFS-13059.001.patch

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13059:
-
Description: 
This jira proposes to add a pie chart in NN UI to show storage used by:
* DFS Used (Tooltip : "Storage currently used for DFS.")
* DFS available (Tooltip : "Storage available for DFS use.")
* Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
 " used by Non DFS storage.")
Tooltip will help users better understand what these terms mean.

  was:
This jira proposes to add a pie chart in NN UI to show storage used by:
* DFS Used (Tooltip : "Storage currently used for DFS.")
* DFS available (Tooltip : "Storage available for DFS use.")
* Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
 " used by Non DFS storage.")
Tooltip will help users better understand what these terms mean.

Credit: [~arpitagarwal],[~anu] for initial suggestion.


> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDFS-13059:
-

 Summary: Add pie chart in NN UI to show storage used
 Key: HDFS-13059
 URL: https://issues.apache.org/jira/browse/HDFS-13059
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ajay Kumar
Assignee: Ajay Kumar


This jira proposes to add a pie chart in NN UI to show storage used by:
* DFS Used (Tooltip : "Storage currently used for DFS.")
* DFS available (Tooltip : "Storage available for DFS use.")
* Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
 " used by Non DFS storage.")
Tooltip will help users better understand what these terms mean.

Credit: [~arpitagarwal],[~anu] for initial suggestion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13043) RBF: Expose the state of the Routers in the federation

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338253#comment-16338253
 ] 

genericqa commented on HDFS-13043:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.metrics.TestFederationMetrics |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13043 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907539/HDFS-13043.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2d11093261c2 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9760761 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22795/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22795/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22795/testReport/ |
| Max. 

[jira] [Commented] (HDFS-13042) RBF: Heartbeat Router State

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338242#comment-16338242
 ] 

genericqa commented on HDFS-13042:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.federation.router.TestRouterQuota |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13042 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907533/HDFS-13042.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 66a3fecee513 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9760761 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22793/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-13049) RBF: Inconsistent Router OPTS config in branch-2 and branch-3

2018-01-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338231#comment-16338231
 ] 

Íñigo Goiri commented on HDFS-13049:


 [^HDFS-13049.002.patch] and [^HDFS-13049-branch-2.002.patch] LGTM, including 
the Windows side.
+1

> RBF: Inconsistent Router OPTS config in branch-2 and branch-3
> -
>
> Key: HDFS-13049
> URL: https://issues.apache.org/jira/browse/HDFS-13049
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13049-branch-2.001.patch, 
> HDFS-13049-branch-2.002.patch, HDFS-13049.001.patch, HDFS-13049.002.patch
>
>
> For the Router's OPTS config:
>  * In trunk/branch-3, the cmd looks for (command)_(subcommand)_OPTS --> 
> HDFS_DFSROUTER_OPTS / HADOOP_DFSROUTER_OPTS.
>  * In branch-2, the cmd looks for HADOOP_ROUTER_OPTS.
> Also, it would be better to add the corresponding config, commented out, to 
> hadoop-env.sh for better user visibility.
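As a sketch of the hadoop-env.sh addition suggested in the description (the entries and values below are placeholders for illustration, not taken from any patch):

```shell
# Hypothetical commented-out entries for hadoop-env.sh, making the Router
# options discoverable; uncomment and adjust to pass JVM options.

# trunk / branch-3 convention ((command)_(subcommand)_OPTS):
# export HDFS_DFSROUTER_OPTS=""

# branch-2 convention:
# export HADOOP_ROUTER_OPTS=""
```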



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12051) Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory

2018-01-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338200#comment-16338200
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12051:


{quote}
Misha described his tests at:

https://issues.apache.org/jira/browse/HDFS-12051?focusedCommentId=16084471&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16084471

and

https://issues.apache.org/jira/browse/HDFS-12051?focusedCommentId=16329891&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16329891
{quote}
[~yzhangal], these two comments appear to report the same test result. Do you 
agree?

A problem with this JIRA is that it seems to have only this one test result. 
Questions/comments:
- What data was used in the test?
- Why not run tests with different data sets?
- Why were no new results posted for the newer patches?
- Were any tests run over FSImage loading?

> Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly 
> those denoting file/directory names) to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch, 
> HDFS-12051.06.patch, HDFS-12051.07.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code:java}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- 

[jira] [Commented] (HDFS-13049) RBF: Inconsistent Router OPTS config in branch-2 and branch-3

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338186#comment-16338186
 ] 

genericqa commented on HDFS-13049:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 24m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 2s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
7s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | HDFS-13049 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907554/HDFS-13049-branch-2.002.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux bd536397e68f 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / c611707 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| shellcheck | v0.4.7 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22797/testReport/ |
| Max. process+thread count | 88 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22797/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Inconsistent Router OPTS config in branch-2 and branch-3
> -
>
> Key: HDFS-13049
> URL: https://issues.apache.org/jira/browse/HDFS-13049
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13049-branch-2.001.patch, 
> HDFS-13049-branch-2.002.patch, HDFS-13049.001.patch, HDFS-13049.002.patch
>
>
> For the Router's OPTS config:
>  * In trunk/branch-3, the cmd looks for (command)_(subcommand)_OPTS --> 
> HDFS_DFSROUTER_OPTS / HADOOP_DFSROUTER_OPTS.
>  * In branch-2, the cmd looks for HADOOP_ROUTER_OPTS.
> Also, it would be better to add the corresponding config, commented out, to 
> hadoop-env.sh for better user visibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To 

[jira] [Commented] (HDFS-12838) Ozone: Optimize number of allocated block rpc by aggregating multiple block allocation requests

2018-01-24 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338178#comment-16338178
 ] 

Anu Engineer commented on HDFS-12838:
-

[~msingh] Thanks for investigating the cache. This patch looks quite good. I 
have only one comment.

*KeyManagerImpl.java: allocateBlocks*
{code}
 while (requestedSize > 0) {
  long allocateSize = Math.min(scmBlockSize, requestedSize);
  sizes.add(allocateSize);
  requestedSize -= allocateSize;
}
{code}

I am presuming that what we are trying to do here is convert
the requested size into blocks. For example, if the block size is 128 MB and the user 
requested 500 MB of space, we will convert that into ceil(500 MB / 128 MB) = 4 
blocks, yielding an allocation of 4 blocks in SCM.

If my assumption is correct, would it make sense to
rewrite the API 
{{scmBlockClient.allocateBlock(sizes, type, factor, ksmId);}}
as
{{scmBlockClient.allocateBlock(blockSize, blockCount, type, factor, ksmId);}}
so that the original while loop can be replaced with

{code}
long blockCount = (requestedSize + scmBlockSize - 1) / scmBlockSize;
scmBlockClient.allocateBlock(scmBlockSize, blockCount, type, factor, ksmId);
{code}

This does mean that the client will have to overwrite the actual block length after 
the update is done.

This means that the signature in {{ScmBlockLocationProtocol.java}} would have 
to change too, and ScmBlockLocationProtocol.proto would change to 
{{uint64 size = 1;}} and {{uint32 count = 2;}}.

* nit: ScmBlockLocationProtocolClientSideTranslatorPB seems to have an unused 
import.
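The size-to-block-count conversion discussed above is a ceiling division. A minimal, self-contained sketch of that computation (the class and method names are hypothetical, not the actual Ozone API):

```java
// Illustrative sketch of computing the aggregated block count;
// BlockCountSketch and computeBlockCount are hypothetical names.
public class BlockCountSketch {
    // Both sizes are in bytes; returns ceil(requestedSize / scmBlockSize).
    static long computeBlockCount(long requestedSize, long scmBlockSize) {
        return (requestedSize + scmBlockSize - 1) / scmBlockSize;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        // 500 MB with a 128 MB block size needs 4 blocks; the last block is
        // short, which is why the client must overwrite the final length.
        System.out.println(computeBlockCount(500 * mb, 128 * mb)); // prints 4
    }
}
```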


> Ozone: Optimize number of allocated block rpc by aggregating multiple block 
> allocation requests
> ---
>
> Key: HDFS-12838
> URL: https://issues.apache.org/jira/browse/HDFS-12838
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-12838-HDFS-7240.001.patch, 
> HDFS-12838-HDFS-7240.002.patch, HDFS-12838-HDFS-7240.003.patch, 
> HDFS-12838-HDFS-7240.004.patch, HDFS-12838-HDFS-7240.005.patch, 
> HDFS-12838-HDFS-7240.006.patch
>
>
> Currently KeySpaceManager allocates multiple blocks by sending multiple block 
> allocation requests over the RPC. This can be optimized to aggregate multiple 
> block allocation request over one rpc.
> {code}
>   while (requestedSize > 0) {
> long allocateSize = Math.min(scmBlockSize, requestedSize);
> AllocatedBlock allocatedBlock =
> scmBlockClient.allocateBlock(allocateSize, type, factor);
> KsmKeyLocationInfo subKeyInfo = new KsmKeyLocationInfo.Builder()
> .setContainerName(allocatedBlock.getPipeline().getContainerName())
> .setBlockID(allocatedBlock.getKey())
> .setShouldCreateContainer(allocatedBlock.getCreateContainer())
> .setIndex(idx++)
> .setLength(allocateSize)
> .setOffset(0)
> .build();
> locations.add(subKeyInfo);
> requestedSize -= allocateSize;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13058) Fix dfs.namenode.shared.edits.dir in TestJournalNode

2018-01-24 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-13058:
-

 Summary: Fix dfs.namenode.shared.edits.dir in TestJournalNode
 Key: HDFS-13058
 URL: https://issues.apache.org/jira/browse/HDFS-13058
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In TestJournalNode.java

dfs.namenode.shared.edits.dir is set as below.

conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY +".ns1" +".nn1",
 "qjournal://journalnode0:9900;journalnode1:9901");

 

From the HDFS documentation:

https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

The URI should be of the form 
{{qjournal://*host1:port1*;*host2:port2*;*host3:port3*/*journalId*}}, but the 
value set in the test omits the trailing *journalId*.
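A sketch of what the corrected test setting might look like, assuming the journalId should be the nameservice name (the {{/ns1}} suffix is an assumption for illustration, not taken from any patch):

```java
// Hypothetical fix for TestJournalNode: append a journalId to the qjournal
// URI so it matches the documented qjournal://host:port;.../journalId form.
// The "ns1" journalId here is assumed, not from an actual patch.
conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY + ".ns1" + ".nn1",
    "qjournal://journalnode0:9900;journalnode1:9901/ns1");
```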



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13049) RBF: Inconsistent Router OPTS config in branch-2 and branch-3

2018-01-24 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HDFS-13049:
---
Attachment: HDFS-13049-branch-2.002.patch

> RBF: Inconsistent Router OPTS config in branch-2 and branch-3
> -
>
> Key: HDFS-13049
> URL: https://issues.apache.org/jira/browse/HDFS-13049
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13049-branch-2.001.patch, 
> HDFS-13049-branch-2.002.patch, HDFS-13049.001.patch, HDFS-13049.002.patch
>
>
> For the Router's OPTS config:
>  * In trunk/branch-3, the cmd looks for (command)_(subcommand)_OPTS --> 
> HDFS_DFSROUTER_OPTS / HADOOP_DFSROUTER_OPTS.
>  * In branch-2, the cmd looks for HADOOP_ROUTER_OPTS.
> Also, it would be better to add the corresponding config, commented out, to 
> hadoop-env.sh for better user visibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13049) RBF: Inconsistent Router OPTS config in branch-2 and branch-3

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338143#comment-16338143
 ] 

genericqa commented on HDFS-13049:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13049 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907531/HDFS-13049.001.patch |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 68d9a9ad7aa8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9760761 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22796/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22796/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Inconsistent Router OPTS config in branch-2 and branch-3
> -
>
> Key: HDFS-13049
> URL: https://issues.apache.org/jira/browse/HDFS-13049
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13049-branch-2.001.patch, 
> HDFS-13049-branch-2.002.patch, HDFS-13049.001.patch, HDFS-13049.002.patch
>
>
> For the Router's OPTS config:
>  * In trunk/branch-3, the cmd looks for (command)_(subcommand)_OPTS --> 
> HDFS_DFSROUTER_OPTS / HADOOP_DFSROUTER_OPTS.
>  * In branch-2, the cmd looks for HADOOP_ROUTER_OPTS.
> Also, it would be better to add the corresponding config, commented out, to 
> hadoop-env.sh for better user visibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13049) RBF: Inconsistent Router OPTS config in branch-2 and branch-3

2018-01-24 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338141#comment-16338141
 ] 

Wei Yan commented on HDFS-13049:


Updated the patches with Windows support. [~elgoiri], could you help double-check 
the Windows part? I don't have a Windows environment to fully test it.

> RBF: Inconsistent Router OPTS config in branch-2 and branch-3
> -
>
> Key: HDFS-13049
> URL: https://issues.apache.org/jira/browse/HDFS-13049
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13049-branch-2.001.patch, 
> HDFS-13049-branch-2.002.patch, HDFS-13049.001.patch, HDFS-13049.002.patch
>
>
> For the Router's OPTS config:
>  * In trunk/branch-3, the cmd looks for (command)_(subcommand)_OPTS --> 
> HDFS_DFSROUTER_OPTS / HADOOP_DFSROUTER_OPTS.
>  * In branch-2, the cmd looks for HADOOP_ROUTER_OPTS.
> Also, it would be better to add the corresponding config, commented out, to 
> hadoop-env.sh for better user visibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13049) RBF: Inconsistent Router OPTS config in branch-2 and branch-3

2018-01-24 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HDFS-13049:
---
Attachment: HDFS-13049.002.patch

> RBF: Inconsistent Router OPTS config in branch-2 and branch-3
> -
>
> Key: HDFS-13049
> URL: https://issues.apache.org/jira/browse/HDFS-13049
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13049-branch-2.001.patch, 
> HDFS-13049-branch-2.002.patch, HDFS-13049.001.patch, HDFS-13049.002.patch
>
>
> For the Router's OPTS config:
>  * In trunk/branch-3, the cmd looks for (command)_(subcommand)_OPTS --> 
> HDFS_DFSROUTER_OPTS / HADOOP_DFSROUTER_OPTS.
>  * In branch-2, the cmd looks for HADOOP_ROUTER_OPTS.
> Also, it would be better to add the corresponding config, commented out, to 
> hadoop-env.sh for better user visibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13055) Aggregate usage statistics from datanodes

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338139#comment-16338139
 ] 

genericqa commented on HDFS-13055:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-hdfs-project: The patch generated 54 new 
+ 1421 unchanged - 0 fixed = 1475 total (was 1421) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}191m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13055 |
| JIRA Patch URL | 

[jira] [Comment Edited] (HDFS-13050) [SPS] : Create start/stop script to start external SPS process.

2018-01-24 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338129#comment-16338129
 ] 

Uma Maheswara Rao G edited comment on HDFS-13050 at 1/24/18 7:47 PM:
-

One quick comment: do we really need to check whether the dependent service is 
running? The context implementation of isRunning can simply check whether the 
SPS service is running, without worrying about whether the NN is running.

In the external case, if the NN is not running, the context cannot connect 
anyway; it can keep retrying until it connects. That retry logic may go into 
the context implementations.



> [SPS] : Create start/stop script to start external SPS process.
> ---
>
> Key: HDFS-13050
> URL: https://issues.apache.org/jira/browse/HDFS-13050
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13050-HDFS-10285-01.patch
>
>
> As part of this Jira we will add a main class for SPS and modify 
> {{hadoop-daemon.sh}} to start the external SPS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13050) [SPS] : Create start/stop script to start external SPS process.

2018-01-24 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338129#comment-16338129
 ] 

Uma Maheswara Rao G commented on HDFS-13050:


One quick comment: do we really need to check whether the dependent service is 
running? The context implementation of isRunning can simply check whether the 
SPS service is running, without worrying about whether the NN is running.

In the external case, if the NN is not running, the context cannot connect 
anyway; it can keep retrying until it connects. That retry logic may go into 
the context implementations.

> [SPS] : Create start/stop script to start external SPS process.
> ---
>
> Key: HDFS-13050
> URL: https://issues.apache.org/jira/browse/HDFS-13050
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13050-HDFS-10285-01.patch
>
>
> As part of this Jira we will add a main class for SPS and modify 
> {{hadoop-daemon.sh}} to start the external SPS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12522) Ozone: Remove the Priority Queues used in the Container State Manager

2018-01-24 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338119#comment-16338119
 ] 

Nanda kumar commented on HDFS-12522:


Thanks [~anu] for updating the patch. Looks pretty good to me.

ContainerStateMap.java line 86: extra ";".
{quote}That does not work, since we don't know which values of info are being 
updated. If we want allocatedBytes to be correct, then it has to be updated at 
a level higher than this.
{quote}
{quote}Technically yes, but unfortunately we use this to update all fields of 
the ContainerInfo. That is, when a state change is executed, the user can 
change other fields too.
{quote}
{{ContainerStateMap#updateState}}: As the name suggests, we should use it only 
for updating the state of the container. In that case, we can update only the 
state field, retain all the other values, and work with just the containerID 
as the argument.
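As a rough sketch of that suggestion (the enum, fields, and class shape here are simplified stand-ins for illustration, not the actual Ozone classes), the narrower updateState would flip only the state and leave every other field untouched:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the suggested narrower signature; the enum, fields
// and class shape are simplified stand-ins, not the actual Ozone code.
public class ContainerStateMapSketch {
  enum LifeCycleState { ALLOCATED, OPEN, CLOSED }

  static class ContainerInfo {
    final long containerID;
    LifeCycleState state;
    long allocatedBytes; // retained, never touched by updateState

    ContainerInfo(long id, LifeCycleState state, long allocatedBytes) {
      this.containerID = id;
      this.state = state;
      this.allocatedBytes = allocatedBytes;
    }
  }

  private final Map<Long, ContainerInfo> containers = new HashMap<>();

  void add(ContainerInfo info) {
    containers.put(info.containerID, info);
  }

  // Updates only the lifecycle state; every other field of the ContainerInfo
  // is retained, and the caller needs nothing beyond the containerID.
  void updateState(long containerID, LifeCycleState newState) {
    ContainerInfo info = containers.get(containerID);
    if (info == null) {
      throw new IllegalArgumentException("Unknown container: " + containerID);
    }
    info.state = newState;
  }

  ContainerInfo get(long containerID) {
    return containers.get(containerID);
  }

  public static void main(String[] args) {
    ContainerStateMapSketch map = new ContainerStateMapSketch();
    map.add(new ContainerInfo(1L, LifeCycleState.ALLOCATED, 4096L));
    map.updateState(1L, LifeCycleState.OPEN);
    System.out.println(map.get(1L).state);          // OPEN
    System.out.println(map.get(1L).allocatedBytes); // 4096
  }
}
```

With a signature like this, callers cannot accidentally rewrite unrelated fields (such as allocatedBytes) as part of a state transition.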
{quote}Line numbers are not correct?
{quote}
-443 - 444 This can be removed.- The line numbers were 427 - 429 (it is fixed 
in patch v2).

 

> Ozone: Remove the Priority Queues used in the Container State Manager
> -
>
> Key: HDFS-12522
> URL: https://issues.apache.org/jira/browse/HDFS-12522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDFS-12522-HDFS-7240.001.patch, 
> HDFS-12522-HDFS-7240.002.patch
>
>
> During code review of HDFS-12387, it was suggested that we remove the 
> priority queues that was used in ContainerStateManager. This JIRA tracks that 
> issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2018-01-24 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338117#comment-16338117
 ] 

Uma Maheswara Rao G commented on HDFS-10285:


Notes:

We had a community meeting on 22nd Jan about the SPS progress and merge plans:

Attendees: Chris, Vinay, Eddy, Rakesh, Surendra and Uma

Uma presented the progress, current status and overall pending tasks related to 
External SPS.

Chris: Can we go ahead and merge the current code, and continue the external 
work afterwards?
Uma: Certainly we could do that, but Anu is more interested in having the 
basic external functionality.

Discussed: the current refactored SPS mostly uses standard APIs that were 
already exposed via ClientProtocol. The few remaining new APIs can be put in 
NamenodeProtocol. This seemed reasonable to all.
All agreed on the plan presented in the slides: with the blockers addressed, 
we can tentatively propose a merge vote by 27th Jan.

Current key blockers for merge: HDFS-12995, HDFS-13050, HDFS-13057

 

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-10285-consolidated-merge-patch-04.patch, 
> HDFS-SPS-TestReport-20170708.pdf, SPS Modularization.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. 
> These policies can be set on a directory or file to specify the user's 
> preference for where the physical blocks should be stored. When the user 
> sets the storage policy before writing data, the blocks can take advantage 
> of the policy preference and the physical blocks are placed accordingly.
> If the user sets the storage policy after the file has been written and 
> completed, the blocks will already have been written under the default 
> storage policy (namely DISK). The user then has to run the 'Mover' tool 
> explicitly, specifying all such file names as a list. In some distributed 
> scenarios (e.g. HBase) it would be difficult to collect all the files and 
> run the tool, since different nodes can write files independently and the 
> files can have different paths.
> Another scenario: when the user renames a file from a directory with one 
> effective storage policy (inherited from the parent directory) into a 
> directory with a different storage policy, the inherited policy is not 
> copied from the source; the file instead takes the destination parent's 
> storage policy. The rename is just a metadata change in the Namenode, and 
> the physical blocks still remain placed per the source storage policy.
> Tracking all such file names across distributed nodes (e.g. region servers) 
> and running the Mover tool could be difficult for admins. The proposal here 
> is to provide an API on the Namenode itself to trigger storage policy 
> satisfaction. A daemon thread inside the Namenode would track such calls and 
> dispatch movement commands to the DNs.
> Will post the detailed design thoughts document soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13055) Aggregate usage statistics from datanodes

2018-01-24 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338111#comment-16338111
 ] 

Anu Engineer commented on HDFS-13055:
-

[~ajayydv] Thanks for the patch. It looks good overall. I have some high-level 
questions and some low-level comments. I will skip the low-level comments so 
that we can focus on the high-level questions first.

* First and foremost, what issue are we trying to address with this patch?
* I am concerned that we are adding more load to HB processing; there are 
patches like HDFS-9239 which tried to address the issue of HB processing 
slowness. Aren't we adding more load to the Namenode -- memory/CPU costs?
* All of this information can be retained on the datanode and pulled only when 
needed: we can expose it via the Datanode JMX, and probably write a simple 
page on the Namenode with JavaScript that connects to the datanodes directly 
and computes this info when needed. What is the rationale for aggregating this 
on the Namenode all the time? Why a push and not a pull when we need this 
data?
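To illustrate the pull model being suggested (a hedged sketch: in a real page the per-datanode maps would come from HTTP GETs against each datanode's /jmx servlet, issued only when the statistics are requested; the metric names and hard-coded snapshots below are illustrative stand-ins):

```java
import java.util.List;
import java.util.Map;

// Hedged sketch of the pull model: the per-datanode maps stand in for JMX
// snapshots that would be fetched on demand from each datanode's /jmx
// servlet, rather than pushed to the Namenode on every heartbeat.
public class DatanodeStatsPull {

  // Sum one metric (e.g. "BytesRead") across per-datanode JMX snapshots.
  static long aggregate(List<Map<String, Long>> snapshots, String metric) {
    long total = 0;
    for (Map<String, Long> snapshot : snapshots) {
      total += snapshot.getOrDefault(metric, 0L);
    }
    return total;
  }

  public static void main(String[] args) {
    List<Map<String, Long>> snapshots = List.of(
        Map.of("BytesRead", 100L, "BytesWritten", 40L),
        Map.of("BytesRead", 250L, "BytesWritten", 60L));
    System.out.println(aggregate(snapshots, "BytesRead"));    // 350
    System.out.println(aggregate(snapshots, "BytesWritten")); // 100
  }
}
```

The point of the sketch is that the aggregation itself is trivial once the snapshots are in hand, so the real design question is where and when the fetch happens, not the arithmetic.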



> Aggregate usage statistics from datanodes
> -
>
> Key: HDFS-13055
> URL: https://issues.apache.org/jira/browse/HDFS-13055
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13055.001.patch
>
>
> We collect a variety of statistics in the DataNodes and expose them via JMX. 
> Aggregating some of the high-level statistics we are already collecting in 
> {{DataNodeMetrics}} (like bytesRead, bytesWritten, etc.) over a configurable 
> time window will create a central repository accessible via JMX and the UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13044) RBF: Add a safe mode for the Router

2018-01-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338101#comment-16338101
 ] 

genericqa commented on HDFS-13044:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 431 unchanged - 0 fixed = 432 total (was 431) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
23s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13044 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907536/HDFS-13004.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 26d740271461 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9760761 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22794/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22794/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22794/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
