[jira] [Updated] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-01-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11295:

Attachment: HDFS-11295.002.patch

> Check storage remaining instead of node remaining in 
> BlockPlacementPolicyDefault.chooseReplicaToDelete()
> 
>
> Key: HDFS-11295
> URL: https://issues.apache.org/jira/browse/HDFS-11295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Xiao Liang
>Assignee: Elek, Marton
> Attachments: HDFS-11295.001.patch, HDFS-11295.002.patch
>
>
> Currently the logic in BlockPlacementPolicyDefault.chooseReplicaToDelete() for 
> choosing the replica to delete is to pick the node with the least free space 
> (node.getRemaining()), provided all heartbeats are within the tolerable 
> heartbeat interval.
> However, a node may have multiple storages, and node.getRemaining() is the sum 
> of their remaining space. If the free space of the storage holding the block to 
> be deleted is low, the free space of the node can still be high because of the 
> node's other storages, so the storage chosen may not be the one with the least 
> free space.
> So using storage.getRemaining() to pick the storage with the least free space 
> when choosing the replica to delete may be a better way to balance storage usage.
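As a rough, minimal sketch of the proposed change (simplified stand-in types and
made-up numbers, not the actual BlockPlacementPolicyDefault code), the difference
is only which "remaining" value the minimum is taken over:

{code}
// Minimal sketch, NOT the real BlockPlacementPolicyDefault code: it only contrasts
// the current node-level check with the proposed storage-level check.
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ChooseReplicaToDeleteSketch {

  /** One candidate replica with the two "remaining" values (stand-in type). */
  static class Candidate {
    final String label;
    final long nodeRemaining;     // node.getRemaining(): sum over all storages of the node
    final long storageRemaining;  // storage.getRemaining(): free space of the replica's storage
    Candidate(String label, long nodeRemaining, long storageRemaining) {
      this.label = label;
      this.nodeRemaining = nodeRemaining;
      this.storageRemaining = storageRemaining;
    }
  }

  public static void main(String[] args) {
    List<Candidate> candidates = Arrays.asList(
        new Candidate("replica on node A", 10_000L, 9_000L),
        new Candidate("replica on node B", 12_000L, 500L));

    // Current behaviour: least node.getRemaining() -> node A is chosen.
    Candidate byNode = candidates.stream()
        .min(Comparator.comparingLong((Candidate c) -> c.nodeRemaining)).get();

    // Proposed behaviour: least storage.getRemaining() -> node B's storage is chosen.
    Candidate byStorage = candidates.stream()
        .min(Comparator.comparingLong((Candidate c) -> c.storageRemaining)).get();

    System.out.println("least node remaining:    " + byNode.label);
    System.out.println("least storage remaining: " + byStorage.label);
  }
}
{code}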



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI

2017-01-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11265:

Target Version/s: 3.0.0-alpha3  (was: 3.0.0-alpha2)
 Description: 
With HDFS-9391, DataNodes in the Maintenance Mode states are shown on the 
DataNode page of the NameNode UI, but they lack the icon visualization shown for 
other node states. The icon visualization needs to be extended to cover 
Maintenance Mode.


[jira] [Commented] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-01-25 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838195#comment-15838195
 ] 

Elek, Marton commented on HDFS-11295:
-

I am not sure what the problem was:
{code}
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.671 sec - in 
org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.081 sec - 
in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.439 sec - in 
org.apache.hadoop.fs.TestUnbuffer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 24.75 sec - in 
org.apache.hadoop.fs.TestEnhancedByteBufferAccess

Results :

Tests run: 4967, Failures: 0, Errors: 0, Skipped: 48

[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:32:00.004s
[INFO] Finished at: Sun Jan 22 22:26:18 UTC 2017
[INFO] Final Memory: 29M/243M
[INFO] 
[WARNING] The requested profile "native" could not be activated because it does 
not exist.
[WARNING] The requested profile "yarn-ui" could not be activated because it 
does not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
{code}

I will upload the same patch again to trigger a new Jenkins build.

> Check storage remaining instead of node remaining in 
> BlockPlacementPolicyDefault.chooseReplicaToDelete()
> 
>
> Key: HDFS-11295
> URL: https://issues.apache.org/jira/browse/HDFS-11295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Xiao Liang
>Assignee: Elek, Marton
> Attachments: HDFS-11295.001.patch, HDFS-11295.002.patch
>
>
> Currently the logic in BlockPlacementPolicyDefault.chooseReplicaToDelete() for 
> choosing the replica to delete is to pick the node with the least free space 
> (node.getRemaining()), provided all heartbeats are within the tolerable 
> heartbeat interval.
> However, a node may have multiple storages, and node.getRemaining() is the sum 
> of their remaining space. If the free space of the storage holding the block to 
> be deleted is low, the free space of the node can still be high because of the 
> node's other storages, so the storage chosen may not be the one with the least 
> free space.
> So using storage.getRemaining() to pick the storage with the least free space 
> when choosing the replica to delete may be a better way to balance storage usage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI

2017-01-25 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838190#comment-15838190
 ] 

Elek, Marton commented on HDFS-11265:
-

OK. But as I understand it, the "decommissioned" state is when the datanode is 
still running but is being prepared for shutdown, and no further blocks will be 
saved to the node. When it's turned off it is "Decommissioned and dead". I 
uploaded two other possible icons (from the Glyphicons Halflings set, which is 
used in the frontend). One is an x in a circle, which is a little more neutral; 
the other one is just an exclamation mark.

> Extend visualization for Maintenance Mode under Datanode tab in the NameNode 
> UI
> ---
>
> Key: HDFS-11265
> URL: https://issues.apache.org/jira/browse/HDFS-11265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Elek, Marton
> Attachments: HDFS-11265.001.patch, icons.png
>
>
> With HDFS-9391, DataNodes in the Maintenance Mode states are shown on the 
> DataNode page of the NameNode UI, but they lack the icon visualization shown 
> for other node states. The icon visualization needs to be extended to cover 
> Maintenance Mode.

[jira] [Updated] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI

2017-01-25 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11265:

Attachment: ex.png
x.png

> Extend visualization for Maintenance Mode under Datanode tab in the NameNode 
> UI
> ---
>
> Key: HDFS-11265
> URL: https://issues.apache.org/jira/browse/HDFS-11265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Elek, Marton
> Attachments: ex.png, HDFS-11265.001.patch, icons.png, x.png
>
>
> With HDFS-9391, DataNodes in the Maintenance Mode states are shown on the 
> DataNode page of the NameNode UI, but they lack the icon visualization shown 
> for other node states. The icon visualization needs to be extended to cover 
> Maintenance Mode.

[jira] [Updated] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI

2017-01-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11265:

Attachment: icons.png

> Extend visualization for Maintenance Mode under Datanode tab in the NameNode 
> UI
> ---
>
> Key: HDFS-11265
> URL: https://issues.apache.org/jira/browse/HDFS-11265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Elek, Marton
> Attachments: HDFS-11265.001.patch, icons.png
>
>
> With HDFS-9391, DataNodes in the Maintenance Mode states are shown on the 
> DataNode page of the NameNode UI, but they lack the icon visualization shown 
> for other node states. The icon visualization needs to be extended to cover 
> Maintenance Mode.

[jira] [Assigned] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI

2017-01-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDFS-11265:
---

Assignee: Elek, Marton

> Extend visualization for Maintenance Mode under Datanode tab in the NameNode 
> UI
> ---
>
> Key: HDFS-11265
> URL: https://issues.apache.org/jira/browse/HDFS-11265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Elek, Marton
> Attachments: HDFS-11265.001.patch, icons.png
>
>
> With HDFS-9391, DataNodes in the Maintenance Mode states are shown on the 
> DataNode page of the NameNode UI, but they lack the icon visualization shown 
> for other node states. The icon visualization needs to be extended to cover 
> Maintenance Mode.

[jira] [Updated] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI

2017-01-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11265:

Target Version/s: 3.0.0-alpha2
  Status: Patch Available  (was: Open)

> Extend visualization for Maintenance Mode under Datanode tab in the NameNode 
> UI
> ---
>
> Key: HDFS-11265
> URL: https://issues.apache.org/jira/browse/HDFS-11265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Elek, Marton
> Attachments: HDFS-11265.001.patch, icons.png
>
>
> With HDFS-9391, DataNodes in the Maintenance Mode states are shown on the 
> DataNode page of the NameNode UI, but they lack the icon visualization shown 
> for other node states. The icon visualization needs to be extended to cover 
> Maintenance Mode.

[jira] [Updated] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-01-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11295:

Target Version/s: 3.0.0-alpha3  (was: 2.7.1)
  Status: Patch Available  (was: Open)

> Check storage remaining instead of node remaining in 
> BlockPlacementPolicyDefault.chooseReplicaToDelete()
> 
>
> Key: HDFS-11295
> URL: https://issues.apache.org/jira/browse/HDFS-11295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Xiao Liang
>Assignee: Elek, Marton
>
> Currently the logic in BlockPlacementPolicyDefault.chooseReplicaToDelete() for 
> choosing the replica to delete is to pick the node with the least free space 
> (node.getRemaining()), provided all heartbeats are within the tolerable 
> heartbeat interval.
> However, a node may have multiple storages, and node.getRemaining() is the sum 
> of their remaining space. If the free space of the storage holding the block to 
> be deleted is low, the free space of the node can still be high because of the 
> node's other storages, so the storage chosen may not be the one with the least 
> free space.
> So using storage.getRemaining() to pick the storage with the least free space 
> when choosing the replica to delete may be a better way to balance storage usage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-01-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11295:

Attachment: HDFS-11295.001.patch

> Check storage remaining instead of node remaining in 
> BlockPlacementPolicyDefault.chooseReplicaToDelete()
> 
>
> Key: HDFS-11295
> URL: https://issues.apache.org/jira/browse/HDFS-11295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Xiao Liang
>Assignee: Elek, Marton
> Attachments: HDFS-11295.001.patch
>
>
> Currently the logic in BlockPlacementPolicyDefault.chooseReplicaToDelete() for 
> choosing the replica to delete is to pick the node with the least free space 
> (node.getRemaining()), provided all heartbeats are within the tolerable 
> heartbeat interval.
> However, a node may have multiple storages, and node.getRemaining() is the sum 
> of their remaining space. If the free space of the storage holding the block to 
> be deleted is low, the free space of the node can still be high because of the 
> node's other storages, so the storage chosen may not be the one with the least 
> free space.
> So using storage.getRemaining() to pick the storage with the least free space 
> when choosing the replica to delete may be a better way to balance storage usage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI

2017-01-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11265:

Attachment: HDFS-11265.001.patch

I suggest using the wrench icon for maintenance and the ban-circle for the 
decommissioned node. 

> Extend visualization for Maintenance Mode under Datanode tab in the NameNode 
> UI
> ---
>
> Key: HDFS-11265
> URL: https://issues.apache.org/jira/browse/HDFS-11265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
> Attachments: HDFS-11265.001.patch
>
>
> With HDFS-9391, DataNodes in the Maintenance Mode states are shown on the 
> DataNode page of the NameNode UI, but they lack the icon visualization shown 
> for other node states. The icon visualization needs to be extended to cover 
> Maintenance Mode.

[jira] [Updated] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-02-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11295:

Attachment: HDFS-11295.jpg

> Check storage remaining instead of node remaining in 
> BlockPlacementPolicyDefault.chooseReplicaToDelete()
> 
>
> Key: HDFS-11295
> URL: https://issues.apache.org/jira/browse/HDFS-11295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Xiao Liang
>Assignee: Elek, Marton
> Attachments: HDFS-11295.001.patch, HDFS-11295.002.patch, 
> HDFS-11295.003.patch, HDFS-11295.jpg
>
>
> Currently the logic in BlockPlacementPolicyDefault.chooseReplicaToDelete() for 
> choosing the replica to delete is to pick the node with the least free space 
> (node.getRemaining()), provided all heartbeats are within the tolerable 
> heartbeat interval.
> However, a node may have multiple storages, and node.getRemaining() is the sum 
> of their remaining space. If the free space of the storage holding the block to 
> be deleted is low, the free space of the node can still be high because of the 
> node's other storages, so the storage chosen may not be the one with the least 
> free space.
> So using storage.getRemaining() to pick the storage with the least free space 
> when choosing the replica to delete may be a better way to balance storage usage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-02-20 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874873#comment-15874873
 ] 

Elek, Marton commented on HDFS-11295:
-

Following the suggestion of [~arpiagariu], I created a visual explanation (see 
the attached image) of this modification.

This is the exact situation that the patched unit test uses:

Without the patch, node2 would be chosen, as the _sum_ of the free space of its 
storages is the lowest there (2 GB). But there is a storage (storage5, on node4) 
whose free space is only 0.5 GB. So after the patch, storage5 will be chosen: 
even though the overall free space on node4 is 100.5 GB, storage5 has only 
0.5 GB, so it should be preferred for deleting unnecessary blocks.
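For reference, here is a tiny self-contained sketch of the same numbers. This is
not the unit test from the patch; the 2 GB value used for the replica's storage
on node2 is an assumed upper bound, since only the node total is given above.

{code}
// Toy recreation of the scenario above -- not the test code from the patch.
public class StorageRemainingScenario {
  public static void main(String[] args) {
    double node2Total = 2.0;       // GB: sum of free space over node2's storages
    double node4Total = 100.5;     // GB: sum of free space over node4's storages
    double storage5Free = 0.5;     // GB: free space of storage5 (located on node4)
    double node2StorageFree = 2.0; // GB: assumed free space of the replica's storage on node2

    // Old rule compares node totals: node2 (2 GB) < node4 (100.5 GB) -> delete from node2.
    String oldChoice = node2Total < node4Total ? "node2" : "node4";
    // New rule compares storage free space: storage5 (0.5 GB) < node2's storage (2 GB)
    // -> delete from storage5 on node4.
    String newChoice = storage5Free < node2StorageFree ? "storage5 on node4" : "node2";

    System.out.println("node-remaining rule picks:    " + oldChoice);
    System.out.println("storage-remaining rule picks: " + newChoice);
  }
}
{code}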

> Check storage remaining instead of node remaining in 
> BlockPlacementPolicyDefault.chooseReplicaToDelete()
> 
>
> Key: HDFS-11295
> URL: https://issues.apache.org/jira/browse/HDFS-11295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Xiao Liang
>Assignee: Elek, Marton
> Attachments: HDFS-11295.001.patch, HDFS-11295.002.patch, 
> HDFS-11295.003.patch, HDFS-11295.jpg
>
>
> Currently the logic in BlockPlacementPolicyDefault.chooseReplicaToDelete() for 
> choosing the replica to delete is to pick the node with the least free space 
> (node.getRemaining()), provided all heartbeats are within the tolerable 
> heartbeat interval.
> However, a node may have multiple storages, and node.getRemaining() is the sum 
> of their remaining space. If the free space of the storage holding the block to 
> be deleted is low, the free space of the node can still be high because of the 
> node's other storages, so the storage chosen may not be the one with the least 
> free space.
> So using storage.getRemaining() to pick the storage with the least free space 
> when choosing the replica to delete may be a better way to balance storage usage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-02-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11295:

Attachment: HDFS-11295.004.patch

Thanks for the review. Improved according to the comments.

> Check storage remaining instead of node remaining in 
> BlockPlacementPolicyDefault.chooseReplicaToDelete()
> 
>
> Key: HDFS-11295
> URL: https://issues.apache.org/jira/browse/HDFS-11295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Xiao Liang
>Assignee: Elek, Marton
> Attachments: HDFS-11295.001.patch, HDFS-11295.002.patch, 
> HDFS-11295.003.patch, HDFS-11295.004.patch, HDFS-11295.jpg
>
>
> Currently the logic in BlockPlacementPolicyDefault.chooseReplicaToDelete() for 
> choosing the replica to delete is to pick the node with the least free space 
> (node.getRemaining()), provided all heartbeats are within the tolerable 
> heartbeat interval.
> However, a node may have multiple storages, and node.getRemaining() is the sum 
> of their remaining space. If the free space of the storage holding the block to 
> be deleted is low, the free space of the node can still be high because of the 
> node's other storages, so the storage chosen may not be the one with the least 
> free space.
> So using storage.getRemaining() to pick the storage with the least free space 
> when choosing the replica to delete may be a better way to balance storage usage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-01-19 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDFS-11295:
---

Assignee: Elek, Marton

> Check storage remaining instead of node remaining in 
> BlockPlacementPolicyDefault.chooseReplicaToDelete()
> 
>
> Key: HDFS-11295
> URL: https://issues.apache.org/jira/browse/HDFS-11295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Xiao Liang
>Assignee: Elek, Marton
>
> Currently the logic in BlockPlacementPolicyDefault.chooseReplicaToDelete() for 
> choosing the replica to delete is to pick the node with the least free space 
> (node.getRemaining()), provided all heartbeats are within the tolerable 
> heartbeat interval.
> However, a node may have multiple storages, and node.getRemaining() is the sum 
> of their remaining space. If the free space of the storage holding the block to 
> be deleted is low, the free space of the node can still be high because of the 
> node's other storages, so the storage chosen may not be the one with the least 
> free space.
> So using storage.getRemaining() to pick the storage with the least free space 
> when choosing the replica to delete may be a better way to balance storage usage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12197) Do the HDFS dist stitching in hadoop-hdfs-project

2017-07-31 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107840#comment-16107840
 ] 

Elek, Marton commented on HDFS-12197:
-

It's not just about running the pseudo-distributed cluster from the dev tree. 
It's also impossible to run the NameNode from the IDE while the scope of the 
selected dependencies is "provided". It would be great to fix this as well.

> Do the HDFS dist stitching in hadoop-hdfs-project
> -
>
> Key: HDFS-12197
> URL: https://issues.apache.org/jira/browse/HDFS-12197
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>
> Problem reported by [~lars_francke] on HDFS-11596. We can no longer easily 
> start a namenode and datanode from the source directory without doing a full 
> build per the wiki instructions: 
> https://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment
> This is because we don't have a top-level dist for HDFS. $HADOOP_YARN_HOME 
> for instance can be set to {{hadoop-yarn-project/target}}, but 
> $HADOOP_HDFS_HOME goes into the submodule: 
> {{hadoop-hdfs-project/hadoop-hdfs/target}}. This means it's missing the files 
> from the sibling hadoop-hdfs-client module (which is required by the 
> namenode), but also other siblings like nfs and httpfs.
> So, I think the right fix is doing the dist stitching at the 
> {{hadoop-hdfs-project}} level where we can aggregate all the child modules, 
> and pointing $HADOOP_HDFS_HOME at this directory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-31 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107825#comment-16107825
 ] 

Elek, Marton commented on HDFS-12034:
-

Uploaded the new patch.

I read the Apache Policy again, and I believe this is the right method:

{code}
In LICENSE, add a pointer to the dependency's license within the distribution 
and a short note summarizing its licensing:

This product bundles SuperWidget 1.2.3, which is available under a
"3-clause BSD" license.  For details, see deps/superwidget/.

Under normal circumstances, there is no need to modify NOTICE.

NOTE: It's also possible to include the text of the 3rd party license within 
the LICENSE file. This is best reserved for short licenses. It's important to 
specify the version of the dependency as licenses are sometimes changed.
{code}

--> They are short, so I included them in the LICENSE, together with the 
copyright lines.

{code}
 However, elements such as the copyright notifications embedded within BSD and 
MIT licenses need not be duplicated in NOTICE -- it suffices to leave those 
notices in their original locations
{code}

--> No need to add to the NOTICE as they are included in the LICENSE.txt.

The exact Angular version (in fact the exact version of every dependency) is 
included, as the full path is added to the LICENSE.txt.

Thanks [~aw] for the remarks. 

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch, 
> HDFS-12034-HDFS-7240.002.patch, HDFS-12034-HDFS-7240.003.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the Ozone 
> KSM server. I created a separate issue to collect the required data/mxbeans 
> separately and to handle the two web interfaces independently, one by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-31 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107808#comment-16107808
 ] 

Elek, Marton commented on HDFS-12034:
-

I also created a new patch.

As I understand it, we don't need to add the copyright lines to the NOTICE if 
they are included in the LICENSE.txt.

So I added the original LICENSE contents from nvd3/angular-nvd3/nvd3 to the 
LICENSE.txt, together with the copyright lines.

For nvd3 we don't need to add the bottom part, which is a copy of the d3 
license. (It's not part of nvd3 and is already included.)

The d3 LICENSE has already been included, together with the copyright line, at 
hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/d3-LICENSE 

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch, 
> HDFS-12034-HDFS-7240.002.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the Ozone 
> KSM server. I created a separate issue to collect the required data/mxbeans 
> separately and to handle the two web interfaces independently, one by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12034) Ozone: Web interface for KSM

2017-07-31 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12034:

Attachment: HDFS-12034-HDFS-7240.003.patch

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch, 
> HDFS-12034-HDFS-7240.002.patch, HDFS-12034-HDFS-7240.003.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the Ozone 
> KSM server. I created a separate issue to collect the required data/mxbeans 
> separately and to handle the two web interfaces independently, one by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-30 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106377#comment-16106377
 ] 

Elek, Marton commented on HDFS-12034:
-

I mean there was no NOTICE.txt in the external sources (like angular, 
angular-route, nvd3, etc.). If I understood the process correctly, in that case 
I don't need to update the NOTICE in the Hadoop source tree.

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch, 
> HDFS-12034-HDFS-7240.002.patch
>
>
> This is the pair of HDFS-12005, but it's about the web interface of the Ozone 
> KSM server. I created a separate issue to collect the required data/mxbeans 
> separately and to handle the two web interfaces independently, one by one.
> Required data (work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web UI)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12005) Ozone: Web interface for SCM

2017-08-11 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12005:

Status: Patch Available  (was: In Progress)

> Ozone: Web interface for SCM
> 
>
> Key: HDFS-12005
> URL: https://issues.apache.org/jira/browse/HDFS-12005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12005-HDFS-7240.001.patch
>
>
> This is a proposal about how a web interface could be implemented for SCM (and 
> later for KSM), similar to the namenode UI.
> 1. JS framework
> There are three big options here.
> A.) Use a full-featured web framework with all the webpack/npm minify/uglify 
> magic. At build time the webpack/npm scripts would be run and the result added 
> to the jar file.
> B.) It could be simplified if the generated minified/uglified JS files are 
> added to the project at commit time. This requires an additional step for every 
> new patch (to regenerate the minified JavaScript) but doesn't require 
> additional JS build tools during the build.
> C.) The third option is to make it as simple as possible, similar to the 
> current namenode UI, which uses JavaScript but commits every dependency 
> (without JS minify/uglify or other preprocessing).
> I prefer the third one because:
>  * I have seen a lot of problems during frequent builds of older tez-ui 
> versions (bower version mismatch, npm version mismatch, npm transitive 
> dependency problems, proxy problems with older versions). They could all be 
> fixed, but that requires additional JS/NPM magic and knowledge. Without an 
> additional npm build step the HDFS project build can be kept simpler.
>  * The complexity of the planned SCM/KSM UI (hopefully it will remain simple) 
> doesn't require a more sophisticated model. (E.g. we don't need JS require, as 
> we need only a few controllers.)
>  * HDFS developers are mostly backend developers, not JS developers.
> 2. Frameworks 
> The big advantage of a more modern JS framework is the simplified programming 
> model (for example, two-way data binding). I suggest using a more modern 
> framework (not just jQuery) which supports plain JS (not only 
> ECMA2015/2016/TypeScript), and just including the required JS files in the 
> project (similar to the included Bootstrap, or as the existing namenode UI 
> works).
>  
>   * React could be a good candidate, but it requires more libraries as it's 
> just a UI framework; even the REST calls need a separate library. It could be 
> used with plain JavaScript instead of JSX and classes, but that is not 
> straightforward and is more verbose.
>  
>   * Ember is used in yarnui2, but the main strength of Ember is the CLI, which 
> couldn't easily be used for the simplified approach. I think Ember fits best 
> with option A.)
>   * Angular 1 is a good candidate (but not so fancy). In the case of Angular 1 
> the component-based approach should be used (that way it could later be easier 
> to migrate to Angular 2 or React).
>   * The mainstream side of Angular 2 uses TypeScript; it could work with plain 
> JS but would require additional knowledge, and most of the tutorials and 
> documentation show the TypeScript approach.
> I suggest using Angular 1 or React. Maybe Angular is easier to use, as we don't 
> need to emulate JSX with function calls and simple HTML templates can be used.
> 3. Backend
> I would prefer the approach of the existing namenode UI, where the backend is 
> just the JMX endpoint. To keep it as simple as possible I suggest trying to 
> avoid a dedicated REST backend if possible. Later we can use the REST APIs of 
> SCM/KSM if they are implemented. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-08-09 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120436#comment-16120436
 ] 

Elek, Marton commented on HDFS-7240:


Is this chat for committers only?

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12005) Ozone: Web interface for SCM

2017-08-11 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12005:

Attachment: HDFS-12005-HDFS-7240.001.patch

> Ozone: Web interface for SCM
> 
>
> Key: HDFS-12005
> URL: https://issues.apache.org/jira/browse/HDFS-12005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12005-HDFS-7240.001.patch
>
>
> This is a proposal about how a web interface could be implemented for SCM (and 
> later for KSM), similar to the namenode UI.
> 1. JS framework
> There are three big options here.
> A.) Use a full-featured web framework with all the webpack/npm minify/uglify 
> magic. At build time the webpack/npm scripts would be run and the result added 
> to the jar file.
> B.) It could be simplified if the generated minified/uglified JS files are 
> added to the project at commit time. This requires an additional step for every 
> new patch (to regenerate the minified JavaScript) but doesn't require 
> additional JS build tools during the build.
> C.) The third option is to make it as simple as possible, similar to the 
> current namenode UI, which uses JavaScript but commits every dependency 
> (without JS minify/uglify or other preprocessing).
> I prefer the third one because:
>  * I have seen a lot of problems during frequent builds of older tez-ui 
> versions (bower version mismatch, npm version mismatch, npm transitive 
> dependency problems, proxy problems with older versions). They could all be 
> fixed, but that requires additional JS/NPM magic and knowledge. Without an 
> additional npm build step the HDFS project build can be kept simpler.
>  * The complexity of the planned SCM/KSM UI (hopefully it will remain simple) 
> doesn't require a more sophisticated model. (E.g. we don't need JS require, as 
> we need only a few controllers.)
>  * HDFS developers are mostly backend developers, not JS developers.
> 2. Frameworks 
> The big advantage of a more modern JS framework is the simplified programming 
> model (for example, two-way data binding). I suggest using a more modern 
> framework (not just jQuery) which supports plain JS (not only 
> ECMA2015/2016/TypeScript), and just including the required JS files in the 
> project (similar to the included Bootstrap, or as the existing namenode UI 
> works).
>  
>   * React could be a good candidate, but it requires more libraries as it's 
> just a UI framework; even the REST calls need a separate library. It could be 
> used with plain JavaScript instead of JSX and classes, but that is not 
> straightforward and is more verbose.
>  
>   * Ember is used in yarnui2, but the main strength of Ember is the CLI, which 
> couldn't easily be used for the simplified approach. I think Ember fits best 
> with option A.)
>   * Angular 1 is a good candidate (but not so fancy). In the case of Angular 1 
> the component-based approach should be used (that way it could later be easier 
> to migrate to Angular 2 or React).
>   * The mainstream side of Angular 2 uses TypeScript; it could work with plain 
> JS but would require additional knowledge, and most of the tutorials and 
> documentation show the TypeScript approach.
> I suggest using Angular 1 or React. Maybe Angular is easier to use, as we don't 
> need to emulate JSX with function calls and simple HTML templates can be used.
> 3. Backend
> I would prefer the approach of the existing namenode UI, where the backend is 
> just the JMX endpoint. To keep it as simple as possible I suggest trying to 
> avoid a dedicated REST backend if possible. Later we can use the REST APIs of 
> SCM/KSM if they are implemented. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12286) Ozone: Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-11 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12286:

Attachment: jmx2.png
jmx1.png

Two JMX bean example

> Ozone: Extend MBeans utility to add any key value pairs to the registered 
> MXBeans
> -
>
> Key: HDFS-12286
> URL: https://issues.apache.org/jira/browse/HDFS-12286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12286-HDFS-7240.001.patch, jmx1.png, jmx2.png
>
>
> The MBeans class in hadoop-common helps to register MXBeans with the platform 
> MBean server. Unfortunately it supports only the Name and Service keys, even 
> though the JMX specification allows any key/value pairs to be used as part of 
> the ObjectName.
> This patch adds the possibility to define more key/value pairs for the JMX 
> ObjectName.
> It will be useful for the SCM/KSM web pages. Both the SCM and KSM servers have 
> common JMX properties, but to use a common HTML component to display them we 
> need a way to get the JMX beans of the SCM and KSM servers with one query.
> This becomes possible by adding an additional (common) key/value property to 
> the ObjectName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12005) Ozone: Web interface for SCM

2017-08-11 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123353#comment-16123353
 ] 

Elek, Marton commented on HDFS-12005:
-

The first version is uploaded, but as it depends on HDFS-12286, I will not set 
the "Patch Available" status until the other one is merged.

Summary of the changes:

1. The common part (for both SCM and KSM: RPC display, build info, uptime) has 
been moved to shared js/html. (So the KSM part is slightly modified.)

2. For SCM, the existing default information is printed out (chill mode 
properties, available nodes per status, ...). 

3. I added a new JMX interface for the BlockManager to print out the number of 
open containers.

4. The previously introduced ServiceRuntime interface has been renamed, as it 
confused the JMX system when the parent interface also followed the *MXBean 
naming convention.

5. The metrics system is initialized for SCM.

6. Style/UX fixes (multiple RPC reports are separated with tabs, no external 
page links for the tools as it's less confusing if they open in the same window, 
etc.).

7. Additional small bugfixes that I noticed during testing. Some of the bugs 
broke the visibility of the KSM data; it should work after this patch.

Tested with corona and the {{hdfs oz}} command.
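Purely as an illustration of point 3, a BlockManager JMX interface could look 
roughly like the sketch below; the interface and method names are guesses for 
the sake of the example, not the actual contents of the patch.

{code}
// Hypothetical shape of a BlockManager MXBean exposing the open-container count;
// names are illustrative only and do not come from the patch.
public interface SCMBlockManagerMXBean {
  /** @return the number of containers currently open for block allocation. */
  int getOpenContainerCount();
}
{code}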

> Ozone: Web interface for SCM
> 
>
> Key: HDFS-12005
> URL: https://issues.apache.org/jira/browse/HDFS-12005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12005-HDFS-7240.001.patch
>
>
> This is a proposal about how a web interface could be implemented for SCM (and 
> later for KSM), similar to the namenode UI.
> 1. JS framework
> There are three big options here.
> A.) Use a full-featured web framework with all the webpack/npm minify/uglify 
> magic. At build time the webpack/npm scripts would be run and the result added 
> to the jar file.
> B.) It could be simplified if the generated minified/uglified JS files are 
> added to the project at commit time. This requires an additional step for every 
> new patch (to regenerate the minified JavaScript) but doesn't require 
> additional JS build tools during the build.
> C.) The third option is to make it as simple as possible, similar to the 
> current namenode UI, which uses JavaScript but commits every dependency 
> (without JS minify/uglify or other preprocessing).
> I prefer the third one because:
>  * I have seen a lot of problems during frequent builds of older tez-ui 
> versions (bower version mismatch, npm version mismatch, npm transitive 
> dependency problems, proxy problems with older versions). They could all be 
> fixed, but that requires additional JS/NPM magic and knowledge. Without an 
> additional npm build step the HDFS project build can be kept simpler.
>  * The complexity of the planned SCM/KSM UI (hopefully it will remain simple) 
> doesn't require a more sophisticated model. (E.g. we don't need JS require, as 
> we need only a few controllers.)
>  * HDFS developers are mostly backend developers, not JS developers.
> 2. Frameworks 
> The big advantage of a more modern JS framework is the simplified programming 
> model (for example, two-way data binding). I suggest using a more modern 
> framework (not just jQuery) which supports plain JS (not only 
> ECMA2015/2016/TypeScript), and just including the required JS files in the 
> project (similar to the included Bootstrap, or as the existing namenode UI 
> works).
>  
>   * React could be a good candidate, but it requires more libraries as it's 
> just a UI framework; even the REST calls need a separate library. It could be 
> used with plain JavaScript instead of JSX and classes, but that is not 
> straightforward and is more verbose.
>  
>   * Ember is used in yarnui2, but the main strength of Ember is the CLI, which 
> couldn't easily be used for the simplified approach. I think Ember fits best 
> with option A.)
>   * Angular 1 is a good candidate (but not so fancy). In the case of Angular 1 
> the component-based approach should be used (that way it could later be easier 
> to migrate to Angular 2 or React).
>   * The mainstream side of Angular 2 uses TypeScript; it could work with plain 
> JS but would require additional knowledge, and most of the tutorials and 
> documentation show the TypeScript approach.
> I suggest using Angular 1 or React. Maybe Angular is easier to use, as we don't 
> need to emulate JSX with function calls and simple HTML templates can be used.
> 3. Backend
> I would prefer the approach of the existing namenode UI, where the backend is 
> just the JMX endpoint. To keep it as simple as possible I suggest trying to 
> avoid a dedicated REST backend if possible. Later we can use the REST APIs of 

[jira] [Created] (HDFS-12286) Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-10 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12286:
---

 Summary: Extend MBeans utility to add any key value pairs to the 
registered MXBeans
 Key: HDFS-12286
 URL: https://issues.apache.org/jira/browse/HDFS-12286
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: HDFS-7240


The MBeans class in hadoop-common helps to register MXBeans with the platform 
MBean server. Unfortunately it supports only the Name and Service keys, even 
though the JMX specification allows any key/value pairs to be used as part of 
the ObjectName.

This patch adds the possibility to define more key/value pairs for the JMX 
ObjectName.

It will be useful for the SCM/KSM web pages. Both the SCM and KSM servers have 
common JMX properties, but to use a common HTML component to display them we 
need a way to get the JMX beans of the SCM and KSM servers with one query.

This becomes possible by adding an additional (common) key/value property to 
the ObjectName.
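A small sketch of what extra key/value pairs in the ObjectName mean, using plain 
JMX (java.lang.management / javax.management) rather than the patched Hadoop 
MBeans utility, whose new signature is not shown here; the service, name, and 
component values below are made up for illustration:

{code}
// Sketch with plain JMX only -- not the Hadoop MBeans utility from hadoop-common.
import java.lang.management.ManagementFactory;
import java.util.Hashtable;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ExtraObjectNameKeysExample {

  /** Trivial MXBean used only to have something to register. */
  public interface DemoMXBean {
    long getUptimeMillis();
  }

  public static class Demo implements DemoMXBean {
    @Override
    public long getUptimeMillis() {
      return ManagementFactory.getRuntimeMXBean().getUptime();
    }
  }

  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();

    // The usual Hadoop-style service/name keys plus an extra "component" key,
    // so SCM and KSM beans could later be matched by a single query such as
    //   /jmx?qry=Hadoop:service=*,name=*,component=ServerRuntime
    Hashtable<String, String> keys = new Hashtable<>();
    keys.put("service", "StorageContainerManager");
    keys.put("name", "ServerRuntime");
    keys.put("component", "ServerRuntime");

    ObjectName objectName = new ObjectName("Hadoop", keys);
    server.registerMBean(new Demo(), objectName);
    System.out.println("Registered: " + objectName);
  }
}
{code}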



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12286) Ozone: Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-11 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123183#comment-16123183
 ] 

Elek, Marton edited comment on HDFS-12286 at 8/11/17 11:07 AM:
---

Yes. 

I uploaded two screenshots. One is the result of the /jmx HTTP call from the 
browser; the other one shows how it looks from jconsole.

And it can be filtered like this:
{code}
/jmx?qry=Hadoop:service=*,name=*,component=ServerRuntime
{code}

If the ServerRuntime tag exists on both KSM and SCM, I can read the same JMX 
properties (build version, uptime) without changing the URL in the component, 
even if the primary name of the JMX bean is different.

Actually the metrics system already uses additional properties. I have a bean 
with the key {{"Hadoop:service=KeySpaceM…,name=...MetricsSystem,sub=Stats"}}
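The same kind of filtering can also be done in-process against the platform 
MBean server; a small sketch (the query pattern and names are illustrative, not 
taken from the patch):

{code}
// Sketch: query all Hadoop beans carrying component=ServerRuntime, regardless of
// whether SCM or KSM registered them. Illustrative only.
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class QueryByComponentExample {
  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    ObjectName pattern = new ObjectName("Hadoop:component=ServerRuntime,*");
    Set<ObjectName> matches = server.queryNames(pattern, null);
    for (ObjectName name : matches) {
      System.out.println(name);
    }
  }
}
{code}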


was (Author: elek):
Two JMX bean example

> Ozone: Extend MBeans utility to add any key value pairs to the registered 
> MXBeans
> -
>
> Key: HDFS-12286
> URL: https://issues.apache.org/jira/browse/HDFS-12286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12286-HDFS-7240.001.patch, jmx1.png, jmx2.png
>
>
> The MBeans class in hadoop-common helps to register MXBeans with the platform 
> MBean server. Unfortunately it supports only the Name and Service keys, even 
> though the JMX specification allows any key/value pairs to be used as part of 
> the ObjectName.
> This patch adds the possibility to define more key/value pairs for the JMX 
> ObjectName.
> It will be useful for the SCM/KSM web pages. Both the SCM and KSM servers have 
> common JMX properties, but to use a common HTML component to display them we 
> need a way to get the JMX beans of the SCM and KSM servers with one query.
> This becomes possible by adding an additional (common) key/value property to 
> the ObjectName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12005) Ozone: Web interface for SCM

2017-08-11 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124079#comment-16124079
 ] 

Elek, Marton commented on HDFS-12005:
-

TestSCMMXBean (and a few typos) is fixed. The other test failures don't seem to 
be related...

> Ozone: Web interface for SCM
> 
>
> Key: HDFS-12005
> URL: https://issues.apache.org/jira/browse/HDFS-12005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12005-HDFS-7240.001.patch, 
> HDFS-12005-HDFS-7240.002.patch
>
>
> This is a proposal about how a web interface could be implemented for SCM (and 
> later for KSM), similar to the namenode UI.
> 1. JS framework
> There are three big options here.
> A.) Use a full-featured web framework with all the webpack/npm minify/uglify 
> magic. At build time the webpack/npm scripts would be run and the result added 
> to the jar file.
> B.) It could be simplified if the generated minified/uglified JS files are 
> added to the project at commit time. This requires an additional step for every 
> new patch (to regenerate the minified JavaScript) but doesn't require 
> additional JS build tools during the build.
> C.) The third option is to make it as simple as possible, similar to the 
> current namenode UI, which uses JavaScript but commits every dependency 
> (without JS minify/uglify or other preprocessing).
> I prefer the third one because:
>  * I have seen a lot of problems during frequent builds of older tez-ui 
> versions (bower version mismatch, npm version mismatch, npm transitive 
> dependency problems, proxy problems with older versions). They could all be 
> fixed, but that requires additional JS/NPM magic and knowledge. Without an 
> additional npm build step the HDFS project build can be kept simpler.
>  * The complexity of the planned SCM/KSM UI (hopefully it will remain simple) 
> doesn't require a more sophisticated model. (E.g. we don't need JS require, as 
> we need only a few controllers.)
>  * HDFS developers are mostly backend developers, not JS developers.
> 2. Frameworks 
> The big advantage of a more modern JS framework is the simplified programming 
> model (for example, two-way data binding). I suggest using a more modern 
> framework (not just jQuery) which supports plain JS (not only 
> ECMA2015/2016/TypeScript), and just including the required JS files in the 
> project (similar to the included Bootstrap, or as the existing namenode UI 
> works).
>  
>   * React could be a good candidate, but it requires more libraries as it's 
> just a UI framework; even the REST calls need a separate library. It could be 
> used with plain JavaScript instead of JSX and classes, but that is not 
> straightforward and is more verbose.
>  
>   * Ember is used in yarnui2, but the main strength of Ember is the CLI, which 
> couldn't easily be used for the simplified approach. I think Ember fits best 
> with option A.)
>   * Angular 1 is a good candidate (but not so fancy). In the case of Angular 1 
> the component-based approach should be used (that way it could later be easier 
> to migrate to Angular 2 or React).
>   * The mainstream side of Angular 2 uses TypeScript; it could work with plain 
> JS but would require additional knowledge, and most of the tutorials and 
> documentation show the TypeScript approach.
> I suggest using Angular 1 or React. Maybe Angular is easier to use, as we don't 
> need to emulate JSX with function calls and simple HTML templates can be used.
> 3. Backend
> I would prefer the approach of the existing namenode UI, where the backend is 
> just the JMX endpoint. To keep it as simple as possible I suggest trying to 
> avoid a dedicated REST backend if possible. Later we can use the REST APIs of 
> SCM/KSM if they are implemented. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12005) Ozone: Web interface for SCM

2017-08-11 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12005:

Attachment: HDFS-12005-HDFS-7240.002.patch

> Ozone: Web interface for SCM
> 
>
> Key: HDFS-12005
> URL: https://issues.apache.org/jira/browse/HDFS-12005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12005-HDFS-7240.001.patch, 
> HDFS-12005-HDFS-7240.002.patch
>
>
> This is a proposal about how a web interface could be implemented for SCM (and 
> later for KSM), similar to the namenode UI.
> 1. JS framework
> There are three big options here.
> A.) Use a full-featured web framework with all the webpack/npm minify/uglify 
> magic. At build time the webpack/npm scripts would be run and the result added 
> to the jar file.
> B.) It could be simplified if the generated minified/uglified JS files are 
> added to the project at commit time. This requires an additional step for every 
> new patch (to regenerate the minified JavaScript) but doesn't require 
> additional JS build tools during the build.
> C.) The third option is to make it as simple as possible, similar to the 
> current namenode UI, which uses JavaScript but commits every dependency 
> (without JS minify/uglify or other preprocessing).
> I prefer the third one because:
>  * I have seen a lot of problems during frequent builds of older tez-ui 
> versions (bower version mismatch, npm version mismatch, npm transitive 
> dependency problems, proxy problems with older versions). They could all be 
> fixed, but that requires additional JS/NPM magic and knowledge. Without an 
> additional npm build step the HDFS project build can be kept simpler.
>  * The complexity of the planned SCM/KSM UI (hopefully it will remain simple) 
> doesn't require a more sophisticated model. (E.g. we don't need JS require, as 
> we need only a few controllers.)
>  * HDFS developers are mostly backend developers, not JS developers.
> 2. Frameworks 
> The big advantage of a more modern JS framework is the simplified programming 
> model (for example, two-way data binding). I suggest using a more modern 
> framework (not just jQuery) which supports plain JS (not only 
> ECMA2015/2016/TypeScript), and just including the required JS files in the 
> project (similar to the included Bootstrap, or as the existing namenode UI 
> works).
>  
>   * React could be a good candidate, but it requires more libraries as it's 
> just a UI framework; even the REST calls need a separate library. It could be 
> used with plain JavaScript instead of JSX and classes, but that is not 
> straightforward and is more verbose.
>  
>   * Ember is used in yarnui2, but the main strength of Ember is the CLI, which 
> couldn't easily be used for the simplified approach. I think Ember fits best 
> with option A.)
>   * Angular 1 is a good candidate (but not so fancy). In the case of Angular 1 
> the component-based approach should be used (that way it could later be easier 
> to migrate to Angular 2 or React).
>   * The mainstream side of Angular 2 uses TypeScript; it could work with plain 
> JS but would require additional knowledge, and most of the tutorials and 
> documentation show the TypeScript approach.
> I suggest using Angular 1 or React. Maybe Angular is easier to use, as we don't 
> need to emulate JSX with function calls and simple HTML templates can be used.
> 3. Backend
> I would prefer the approach of the existing namenode UI, where the backend is 
> just the JMX endpoint. To keep it as simple as possible I suggest trying to 
> avoid a dedicated REST backend if possible. Later we can use the REST APIs of 
> SCM/KSM if they are implemented. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12286) Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12286:

Status: Patch Available  (was: Open)

> Extend MBeans utility to add any key value pairs to the registered MXBeans
> --
>
> Key: HDFS-12286
> URL: https://issues.apache.org/jira/browse/HDFS-12286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12286-HDFS-7240.001.patch
>
>
> The MBeans class in hadoop-common helps to register an MXBean with the 
> platform MBean server. Unfortunately it supports only the Name and Service 
> keys, even though the JMX specification allows any key/value pairs to be used 
> as part of the ObjectName.
> This patch adds the possibility to define more key/value pairs for the JMX 
> ObjectName.
> It will be useful for the SCM/KSM web pages. Both the SCM and KSM servers 
> have common jmx properties, but to use a common html component to display 
> them we need a way to get the JMX beans of the SCM server and the KSM server 
> with one query.
> This will be possible by adding an additional (common) key/value property to 
> the ObjectName.
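
For illustration (not part of the patch), a minimal sketch of the underlying JMX mechanism the description relies on: the platform MBean server accepts an ObjectName assembled from arbitrary key/value properties, so an extended register() helper only has to append the extra pairs. The helper class, its signature and the component=ozone property below are assumptions for the sketch, not the actual MBeans API added by the patch.

{code}
import java.lang.management.ManagementFactory;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public final class JmxRegistrationSketch {

  /**
   * Builds Hadoop:service=<service>,name=<name>,<extra pairs> and registers
   * the given MXBean (an object implementing a *MXBean interface) under it.
   */
  public static ObjectName register(String service, String name,
      Map<String, String> extraProperties, Object mxBean) throws Exception {
    StringBuilder objectNameStr = new StringBuilder("Hadoop:service=")
        .append(service).append(",name=").append(name);
    for (Map.Entry<String, String> e : extraProperties.entrySet()) {
      objectNameStr.append(',').append(e.getKey()).append('=').append(e.getValue());
    }
    ObjectName objectName = new ObjectName(objectNameStr.toString());
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    mbs.registerMBean(mxBean, objectName);
    return objectName;
  }

  public static void main(String[] args) {
    // Hypothetical usage: both SCM and KSM register with component=ozone, so
    // one ObjectName pattern (e.g. Hadoop:component=ozone,*) matches both.
    Map<String, String> extra = new LinkedHashMap<>();
    extra.put("component", "ozone");
    // register("StorageContainerManager", "SCMInfo", extra, scmInfoMxBean);
  }
}
{code}

On the web side, the /jmx servlet already accepts an ObjectName pattern via its qry parameter, so a shared HTML component could query something like /jmx?qry=Hadoop:component=ozone,* once such a common property exists (the property name is, again, only an assumption).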



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12286) Extend MBeans utility to add any key value pairs to the registered MXBeans

2017-08-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12286:

Attachment: HDFS-12286-HDFS-7240.001.patch

> Extend MBeans utility to add any key value pairs to the registered MXBeans
> --
>
> Key: HDFS-12286
> URL: https://issues.apache.org/jira/browse/HDFS-12286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12286-HDFS-7240.001.patch
>
>
> The MBeans class in hadoop-common helps to register an MXBean with the 
> platform MBean server. Unfortunately it supports only the Name and Service 
> keys, even though the JMX specification allows any key/value pairs to be used 
> as part of the ObjectName.
> This patch adds the possibility to define more key/value pairs for the JMX 
> ObjectName.
> It will be useful for the SCM/KSM web pages. Both the SCM and KSM servers 
> have common jmx properties, but to use a common html component to display 
> them we need a way to get the JMX beans of the SCM server and the KSM server 
> with one query.
> This will be possible by adding an additional (common) key/value property to 
> the ObjectName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12034) Ozone: Web interface for KSM

2017-07-25 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12034:

Status: Patch Available  (was: Open)

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the counterpart of HDFS-12005, but it's about the web interface of 
> the Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12034) Ozone: Web interface for KSM

2017-07-25 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12034:

Attachment: HDFS-12034-HDFS-7240.001.patch

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the counterpart of HDFS-12005, but it's about the web interface of 
> the Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-25 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100034#comment-16100034
 ] 

Elek, Marton commented on HDFS-12034:
-

Hi, I submitted the first patch. It doesn't contain many additional detailed 
metrics, just the existing ones (hadoop rpc/KSMMetrics) and a few basic new 
ones (uptime, build version). If the percentiles are enabled, it will show 
graphs for the more detailed rpc metrics. (The required config keys are shown 
on the web page; see the sketch below.)

It's not a small patch (even if half of it is just external dependencies) and 
it's the first working/usable version of the web interface, so I suggest 
merging it and adding more detailed metrics in separate issues (as they will 
most probably require bigger code changes).

I added the angular/d3/nvd3/angular-nvd3 libraries to the project, but all the 
licenses are compatible (MIT, BSD 3-clause, Apache, MIT, in the same order).
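
For reference, a hedged sketch of the kind of settings the comment refers to, assuming the standard Hadoop RPC metrics keys; the keys actually listed on the KSM web page are authoritative:

{code}
<!-- Assumed standard Hadoop keys: enable RPC latency quantiles and compute
     them over 60 and 300 second windows so the web ui can draw the graphs. -->
<property>
  <name>rpc.metrics.quantile.enable</name>
  <value>true</value>
</property>
<property>
  <name>rpc.metrics.percentiles.intervals</name>
  <value>60,300</value>
</property>
{code}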

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the counterpart of HDFS-12005, but it's about the web interface of 
> the Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-28 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104636#comment-16104636
 ] 

Elek, Marton commented on HDFS-12034:
-

FTR:

Angular (+route) and Angular nvd3 are under MIT. I added the new files to the 
existing MIT section of the LICENSE file.
d3: 3-clause BSD. We have already used d3, but this is a different version, as 
angular nvd3 requires this one. The LICENSE text is the same; the new path is 
added to the LICENSE file.
nvd3: Apache license.

I didn't find any NOTICE in the source repos, so I just changed the LICENSE 
file. The patch will be uploaded soon, after the investigation of the build errors.

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the counterpart of HDFS-12005, but it's about the web interface of 
> the Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-28 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104675#comment-16104675
 ] 

Elek, Marton commented on HDFS-12034:
-

The previous build is a little bit suspicious:

It contains a commit from trunk. From the console:

{code}
HEAD is now at 38c6fa5 HADOOP-11875. [JDK9] Adding a second copy of Hamlet 
without _ as a one-character identifier.
{code}

I'll improve the patch and try it again...

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch
>
>
> This is the counterpart of HDFS-12005, but it's about the web interface of 
> the Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12034) Ozone: Web interface for KSM

2017-07-28 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12034:

Attachment: HDFS-12034-HDFS-7240.002.patch

> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch, 
> HDFS-12034-HDFS-7240.002.patch
>
>
> This is the counterpart of HDFS-12005, but it's about the web interface of 
> the Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and handle the two web interfaces independently, one 
> by one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volume)
> * Available keys (per bucket)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12266) Ozone : add debug cli to hdfs script

2017-08-09 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119696#comment-16119696
 ] 

Elek, Marton commented on HDFS-12266:
-

I think it is a duplicate of HDFS-11836 and one of them could be reverted. But 
maybe I missed something.

> Ozone : add debug cli to hdfs script
> 
>
> Key: HDFS-12266
> URL: https://issues.apache.org/jira/browse/HDFS-12266
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12266-HDFS-7240.001.patch
>
>
> The debug CLI (which converts metadata levelDB/RocksDB file to sqlite file) 
> is still missing in hdfs script, this JIRA adds it as one of the hdfs 
> subcommands. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12266) Ozone : add debug cli to hdfs script

2017-08-09 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119721#comment-16119721
 ] 

Elek, Marton edited comment on HDFS-12266 at 8/9/17 10:59 AM:
--

Oh, I found it, there was a typo in the commit of HDFS-11836 (commnad):

{code}
hadoop_add_subcommnad "sqlconvert" "convert ozone leveldb files into sqlite db 
file for debug purpose"
{code}

So {{sqlconvert}} was not visible from the command line.

I suggest to revert the older commit: HDFS-11836


was (Author: elek):
Oh, I found it, there was a typo in the commit of HDFS-11836:

{code}
hadoop_add_subcommnad "sqlconvert" "convert ozone leveldb files into sqlite db 
file for debug purpose"
{code}

So {{sqlconvert}} was not visible from the command line.

I suggest to revert the older commit: HDFS-11836

> Ozone : add debug cli to hdfs script
> 
>
> Key: HDFS-12266
> URL: https://issues.apache.org/jira/browse/HDFS-12266
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12266-HDFS-7240.001.patch
>
>
> The debug CLI (which converts metadata levelDB/RocksDB file to sqlite file) 
> is still missing in hdfs script, this JIRA adds it as one of the hdfs 
> subcommands. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12266) Ozone : add debug cli to hdfs script

2017-08-09 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119721#comment-16119721
 ] 

Elek, Marton commented on HDFS-12266:
-

Oh, I found it, there was a typo in the commit of HDFS-11836:

{code}
hadoop_add_subcommnad "sqlconvert" "convert ozone leveldb files into sqlite db 
file for debug purpose"
{code}

So {{sqlconvert}} was not visible from the command line.

I suggest to revert the older commit: HDFS-11836
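
For clarity, the intended registration (with the function name spelled correctly) would presumably be:

{code}
hadoop_add_subcommand "sqlconvert" "convert ozone leveldb files into sqlite db file for debug purpose"
{code}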

> Ozone : add debug cli to hdfs script
> 
>
> Key: HDFS-12266
> URL: https://issues.apache.org/jira/browse/HDFS-12266
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12266-HDFS-7240.001.patch
>
>
> The debug CLI (which converts metadata levelDB/RocksDB file to sqlite file) 
> is still missing in hdfs script, this JIRA adds it as one of the hdfs 
> subcommands. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2017-08-09 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120502#comment-16120502
 ] 

Elek, Marton commented on HDFS-7240:


Yeah, but it seems that for the registration I need an {{@apache.org}} email 
address. And there is no information about invites anywhere.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12162) Update listStatus document to describe the behavior when the argument is a file

2017-08-07 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116490#comment-16116490
 ] 

Elek, Marton commented on HDFS-12162:
-

LGTM. I just tested the HDFS-12139 patch, and the REST call worked as described 
in the patch.

> Update listStatus document to describe the behavior when the argument is a 
> file
> ---
>
> Key: HDFS-12162
> URL: https://issues.apache.org/jira/browse/HDFS-12162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, httpfs
>Reporter: Yongjun Zhang
>Assignee: Ajay Yadav
> Attachments: HDFS-12162.01.patch, Screen Shot 2017-08-03 at 11.01.46 
> AM.png, Screen Shot 2017-08-03 at 11.02.19 AM.png
>
>
> The listStatus method can take in either directory path or file path as 
> input, however, currently both the javadoc and external document describe it 
> as only taking directory as input. This jira is to update the document about 
> the behavior when the argument is a file path.
> Thanks [~xiaochen] for the review and discussion in HDFS-12139, creating this 
> jira is the result of our discussion there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12005) Ozone: Web interface for SCM

2017-08-07 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12005 started by Elek, Marton.
---
> Ozone: Web interface for SCM
> 
>
> Key: HDFS-12005
> URL: https://issues.apache.org/jira/browse/HDFS-12005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> This is a proposal about how a web interface could be implemented for SCM (and 
> later for KSM), similar to the namenode ui.
> 1. JS framework
> There are three big options here. 
> A.) One is to use a full featured web framework with all the webpack/npm 
> minify/uglify magic. At build time the webpack/npm scripts would be run and 
> the result added to the jar file. 
> B.) It could be simplified if the generated minified/uglified js files are 
> added to the project at commit time. It requires an additional step for every 
> new patch (to generate the new minified javascripts) but doesn't require 
> additional JS build tools during the build.
> C.) The third option is to make it as simple as possible, similar to the 
> current namenode ui, which uses javascript but every dependency is committed 
> (without JS minify/uglify and other preprocessing).
> I prefer the third one because:
>  * I have seen a lot of problems during frequent builds of older tez-ui 
> versions (bower version mismatch, npm version mismatch, npm transitive 
> dependency problems, proxy problems with older versions). All of them could 
> be fixed, but that requires additional JS/NPM magic/knowledge. Without an 
> additional npm build step the build of the hdfs projects can be kept simpler.
>  * The complexity of the planned SCM/KSM ui (hopefully it will remain simple) 
> doesn't require a more sophisticated model. (E.g. we don't need JS require as 
> we need only a few controllers.)
>  * HDFS developers are mostly backend developers and not JS developers.
> 2. Frameworks 
> The big advantage of a more modern JS framework is the simplified 
> programming model (for example with two-way databinding). I suggest using a 
> more modern framework (not just jquery) which supports plain js (not just 
> ECMA2015/2016/typescript) and just including the required js files in the 
> projects (similar to the included bootstrap, or as the existing namenode ui 
> works). 
>   * React could be a good candidate, but it requires more libraries as it's 
> just a ui framework; even the REST calls need a separate library. It could be 
> used with plain javascript instead of JSX and classes, but that's not 
> straightforward and it's more verbose.
>   * Ember is used in yarnui2, but the main strength of Ember is the CLI, 
> which couldn't be used easily with the simplified approach. I think Ember 
> fits best with the A.) option.
>   * Angular 1 is a good candidate (but not so fancy). In case of angular 1 
> the component-based approach should be used (in that case it could be easier 
> to migrate to angular 2 or react later).
>   * The mainstream side of Angular 2 uses typescript; it could work with 
> plain JS but that would require additional knowledge, as most of the 
> tutorials and documentation show the typescript approach.
> I suggest using angular 1 or react. Maybe angular is easier to use, as we 
> don't need to emulate JSX with function calls and simple HTML templates can 
> be used.
> 3. Backend
> I would prefer the approach of the existing namenode ui where the backend is 
> just the jmx endpoint. To keep it as simple as possible I suggest trying to 
> avoid a dedicated REST backend if possible. Later we can use the REST api of 
> SCM/KSM once they are implemented. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12005) Ozone: Web interface for SCM

2017-08-07 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116767#comment-16116767
 ] 

Elek, Marton commented on HDFS-12005:
-

The current plan is to share the common part of the KSM web interface with the 
SCM (jvm args, uptime, RPC latency).

As a first step I would add the following SCM-specific stats:

* Size of openContainers from the BlockManager
* Table of the nodes from the NodeManager
* Aggregated NodeStats from the NodeManager

> Ozone: Web interface for SCM
> 
>
> Key: HDFS-12005
> URL: https://issues.apache.org/jira/browse/HDFS-12005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> This is a proposal about how a web interface could be implemented for SCM (and 
> later for KSM), similar to the namenode ui.
> 1. JS framework
> There are three big options here. 
> A.) One is to use a full featured web framework with all the webpack/npm 
> minify/uglify magic. At build time the webpack/npm scripts would be run and 
> the result added to the jar file. 
> B.) It could be simplified if the generated minified/uglified js files are 
> added to the project at commit time. It requires an additional step for every 
> new patch (to generate the new minified javascripts) but doesn't require 
> additional JS build tools during the build.
> C.) The third option is to make it as simple as possible, similar to the 
> current namenode ui, which uses javascript but every dependency is committed 
> (without JS minify/uglify and other preprocessing).
> I prefer the third one because:
>  * I have seen a lot of problems during frequent builds of older tez-ui 
> versions (bower version mismatch, npm version mismatch, npm transitive 
> dependency problems, proxy problems with older versions). All of them could 
> be fixed, but that requires additional JS/NPM magic/knowledge. Without an 
> additional npm build step the build of the hdfs projects can be kept simpler.
>  * The complexity of the planned SCM/KSM ui (hopefully it will remain simple) 
> doesn't require a more sophisticated model. (E.g. we don't need JS require as 
> we need only a few controllers.)
>  * HDFS developers are mostly backend developers and not JS developers.
> 2. Frameworks 
> The big advantage of a more modern JS framework is the simplified 
> programming model (for example with two-way databinding). I suggest using a 
> more modern framework (not just jquery) which supports plain js (not just 
> ECMA2015/2016/typescript) and just including the required js files in the 
> projects (similar to the included bootstrap, or as the existing namenode ui 
> works). 
>   * React could be a good candidate, but it requires more libraries as it's 
> just a ui framework; even the REST calls need a separate library. It could be 
> used with plain javascript instead of JSX and classes, but that's not 
> straightforward and it's more verbose.
>   * Ember is used in yarnui2, but the main strength of Ember is the CLI, 
> which couldn't be used easily with the simplified approach. I think Ember 
> fits best with the A.) option.
>   * Angular 1 is a good candidate (but not so fancy). In case of angular 1 
> the component-based approach should be used (in that case it could be easier 
> to migrate to angular 2 or react later).
>   * The mainstream side of Angular 2 uses typescript; it could work with 
> plain JS but that would require additional knowledge, as most of the 
> tutorials and documentation show the typescript approach.
> I suggest using angular 1 or react. Maybe angular is easier to use, as we 
> don't need to emulate JSX with function calls and simple HTML templates can 
> be used.
> 3. Backend
> I would prefer the approach of the existing namenode ui where the backend is 
> just the jmx endpoint. To keep it as simple as possible I suggest trying to 
> avoid a dedicated REST backend if possible. Later we can use the REST api of 
> SCM/KSM once they are implemented. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12002) Ozone : SCM cli misc fixes/improvements

2017-06-20 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056613#comment-16056613
 ] 

Elek, Marton commented on HDFS-12002:
-

+1. I suggest printing out the exception message in case of any error:

{code}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
index c0d3651b0bb..733fc6f7b3d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
@@ -105,6 +105,8 @@ public static void main(String[] argv) throws Exception {
     try {
       res = ToolRunner.run(shell, argv);
     } catch (Exception ex) {
+      System.err.println("ERROR: " + ex.getMessage());
       System.exit(1);
     }
     System.exit(res);
{code}

I had no hadoop-site.xml (it's not required to run scm/ksm/namenode/datanode), 
and currently the cli can't be started without it (and there is no error 
message at all).

> Ozone : SCM cli misc fixes/improvements
> ---
>
> Key: HDFS-12002
> URL: https://issues.apache.org/jira/browse/HDFS-12002
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: ozone
>
>
> Currently there are a few minor issues with the SCM CLI:
> 1. some commands do not use -c option to take container name. an issue with 
> this is that arguments need to be in a certain order to be correctly parsed, 
> e.g.:
> {{./bin/hdfs scm -container -del c0 -f}} works, but
> {{./bin/hdfs scm -container -del -f c0}} will not
> A more important thing is that, since -del requires the following argument 
> being container name, if someone types {{./bin/hdfs scm -container -del 
> -help}} it will be an error, while we probably want to display a help message 
> instead.
> 2.some subcommands are not displaying the errors in the best way it could be, 
> e.g.:
> {{./bin/hdfs scm -container -del}} is wrong because it misses container name. 
> So cli complains 
> {code}
> Missing argument for option: del
> Unrecognized options:[-container, -del]
> usage: hdfs scm  []
> where  can be one of the following
>  -container   Container related options
> {code}
> but this does not really show that it is container name it is missing
> 3. probably better to rename -del to -delete to be consistent with other 
> commands like -create and -info
> 4. when passing in invalid argument e.g. -info on a non-existing container, 
> an exception will be displayed. We probably should not scare the users, and 
> only display just one error message. And move the exception display to debug 
> mode display or something.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12005) Ozone: Web interface for SCM

2017-06-20 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12005:
---

 Summary: Ozone: Web interface for SCM
 Key: HDFS-12005
 URL: https://issues.apache.org/jira/browse/HDFS-12005
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


This is a proposal about how a web interface could be implemented for SCM (and 
later for KSM), similar to the namenode ui.

1. JS framework

There are three big options here. 

A.) One is to use a full featured web framework with all the webpack/npm 
minify/uglify magic. At build time the webpack/npm scripts would be run and the 
result added to the jar file. 

B.) It could be simplified if the generated minified/uglified js files are 
added to the project at commit time. It requires an additional step for every 
new patch (to generate the new minified javascripts) but doesn't require 
additional JS build tools during the build.

C.) The third option is to make it as simple as possible, similar to the current 
namenode ui, which uses javascript but every dependency is committed (without JS 
minify/uglify and other preprocessing).

I prefer the third one because:

 * I have seen a lot of problems during frequent builds of older tez-ui 
versions (bower version mismatch, npm version mismatch, npm transitive 
dependency problems, proxy problems with older versions). All of them could be 
fixed, but that requires additional JS/NPM magic/knowledge. Without an 
additional npm build step the build of the hdfs projects can be kept simpler.

 * The complexity of the planned SCM/KSM ui (hopefully it will remain simple) 
doesn't require a more sophisticated model. (E.g. we don't need JS require as we 
need only a few controllers.)

 * HDFS developers are mostly backend developers and not JS developers.

2. Frameworks 

The big advantage of a more modern JS framework is the simplified programming 
model (for example with two-way databinding). I suggest using a more modern 
framework (not just jquery) which supports plain js (not just 
ECMA2015/2016/typescript) and just including the required js files in the 
projects (similar to the included bootstrap, or as the existing namenode ui 
works). 

  * React could be a good candidate, but it requires more libraries as it's just 
a ui framework; even the REST calls need a separate library. It could be used 
with plain javascript instead of JSX and classes, but that's not straightforward 
and it's more verbose.

  * Ember is used in yarnui2, but the main strength of Ember is the CLI, which 
couldn't be used easily with the simplified approach. I think Ember fits best 
with the A.) option.

  * Angular 1 is a good candidate (but not so fancy). In case of angular 1 the 
component-based approach should be used (in that case it could be easier to 
migrate to angular 2 or react later).

  * The mainstream side of Angular 2 uses typescript; it could work with plain 
JS but that would require additional knowledge, as most of the tutorials and 
documentation show the typescript approach.

I suggest using angular 1 or react. Maybe angular is easier to use, as we don't 
need to emulate JSX with function calls and simple HTML templates can be used.

3. Backend

I would prefer the approach of the existing namenode ui where the backend is 
just the jmx endpoint. To keep it as simple as possible I suggest trying to 
avoid a dedicated REST backend if possible. Later we can use the REST api of 
SCM/KSM once they are implemented. 
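
To make the backend option concrete: the jmx endpoint of an HttpServer2 instance serves JSON of roughly the following shape, which the page controllers can consume directly. The bean name and attribute below are placeholders; only the top-level beans array and the name/modelerType fields reflect the servlet's real output format.

{code}
{
  "beans" : [ {
    "name" : "Hadoop:service=StorageContainerManager,name=SCMNodeManagerInfo",
    "modelerType" : "...",
    "SomeAttribute" : "..."
  } ]
}
{code}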




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-21 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Target Version/s: HDFS-7240
   Fix Version/s: (was: HDFS-7240)

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.
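
As a rough illustration of the "common super class" idea, here is a minimal sketch assuming the HttpServer2.Builder API used by the existing namenode/datanode web servers; the class and method names (BaseHttpServer, getHttpBindAddress, getName) are placeholders, not the names used in the attached patch.

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.HttpServer2;

/**
 * Hedged sketch of a common base class for the KSM and SCM web servers:
 * subclasses only supply a name and a bind address, while the HttpServer2
 * wiring and the start/stop lifecycle live in one place.
 */
public abstract class BaseHttpServer {

  private final Configuration conf;
  private HttpServer2 httpServer;

  protected BaseHttpServer(Configuration conf) {
    this.conf = conf;
  }

  /** e.g. "scm" or "ksm"; also selects the index page resources. */
  protected abstract String getName();

  /** HTTP bind address resolved from the ozone configuration keys. */
  protected abstract InetSocketAddress getHttpBindAddress();

  public void start() throws IOException {
    InetSocketAddress addr = getHttpBindAddress();
    httpServer = new HttpServer2.Builder()
        .setName(getName())
        .setConf(conf)
        .addEndpoint(URI.create(
            "http://" + addr.getHostString() + ":" + addr.getPort()))
        .build();
    // HttpServer2 registers the default servlets (conf, jmx, logLevel, stacks).
    httpServer.start();
  }

  public void stop() throws Exception {
    if (httpServer != null) {
      httpServer.stop();
    }
  }
}
{code}

The https/kerberos wiring mentioned in the description would go through additional builder calls, mirroring how the namenode http server configures them.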



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-21 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Status: Patch Available  (was: Open)

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-21 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Attachment: HDFS-12007-HDFS-7240.001.patch

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12007-HDFS-7240.001.patch
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Attachment: HDFS-12007-HDFS-7240.002.patch

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-21 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058558#comment-16058558
 ] 

Elek, Marton commented on HDFS-12007:
-

Still working on the style/testing issues. (I tested with 
dev-support/bin/test-patch but the result was different.)

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Attachment: HDFS-12007-HDFS-7240.003.patch

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Attachment: HDFS-12007-HDFS-7240.004.patch

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Attachment: HDFS-12007-HDFS-7240.006.patch

Rebased.

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch, HDFS-12007-HDFS-7240.005.patch, 
> HDFS-12007-HDFS-7240.006.patch, Screen Shot 2017-06-22 at 10.28.05 PM.png, 
> Screen Shot 2017-06-22 at 10.28.32 PM.png, Screen Shot 2017-06-22 at 10.28.48 
> PM.png
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061285#comment-16061285
 ] 

Elek, Marton commented on HDFS-12007:
-

Thanks for the feedback/hints. Yes, I should include hadoop.css as well, not 
just bootstrap. But the web ui can be improved in the follow-up JIRAs.

Please hold off on the merge; I would like to add the configuration to 
ozone-default.xml to be compatible with HDFS-11990 / HDFS-12023.

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch, Screen Shot 2017-06-22 at 10.28.05 PM.png, 
> Screen Shot 2017-06-22 at 10.28.32 PM.png, Screen Shot 2017-06-22 at 10.28.48 
> PM.png
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12023) Ozone: test if all the configuration keys documented in ozone-defaults.xml

2017-06-23 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061269#comment-16061269
 ] 

Elek, Marton commented on HDFS-12023:
-

Thanks for the hint, TestConfigurationFieldsBase is exactly what I needed. 
Updated the patch.

It also shows if the defaults are different:

I increased OZONE_SCM_HANDLER_COUNT_DEFAULT to 20 (as it was defined in 
ozone-default).

But I couldn't decide whether the handler type should be fixed or not. (As far 
as I know, the local handler is only for testing.)

{code}
ozone-default.xml has 1 properties that do not match the default Config value
  XML Property: ozone.handler.type
  XML Value:local
  Config Name:  OZONE_HANDLER_TYPE_DEFAULT
  Config Value: distributed
{code}
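
For reference, a hedged sketch of how such a test can be wired on top of TestConfigurationFieldsBase, following the pattern used for hdfs-default.xml; the Ozone class names and their packages are placeholders for whichever classes hold the *_KEY constants, and the exact field set of the base class is assumed:

{code}
import org.apache.hadoop.conf.TestConfigurationFieldsBase;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.scm.ScmConfigKeys;

/** Verifies that every *_KEY constant has an entry in ozone-default.xml. */
public class TestOzoneConfigurationFields extends TestConfigurationFieldsBase {

  @Override
  public void initializeMemberVariables() {
    xmlFilename = "ozone-default.xml";
    configurationClasses = new Class[] {
        OzoneConfigKeys.class, ScmConfigKeys.class };
    // Fail when a *_KEY constant is missing from ozone-default.xml, but don't
    // fail on xml-only properties for now.
    errorIfMissingConfigProps = true;
    errorIfMissingXmlProps = false;
  }
}
{code}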



> Ozone: test if all the configuration keys documented in ozone-defaults.xml
> --
>
> Key: HDFS-12023
> URL: https://issues.apache.org/jira/browse/HDFS-12023
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: test
> Attachments: HDFS-12023-HDFS-7240.001.patch, 
> HDFS-12023-HDFS-7240.002.patch
>
>
> HDFS-11990 added the missing configuration entries to the ozone-defaults.xml.
> This patch contains a unit test which tests if all the configuration keys are 
> still documented.
> (Constant fields of the specific configuration classes which end with _KEY 
> should be part of the defaults.xml.) 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061427#comment-16061427
 ] 

Elek, Marton commented on HDFS-12007:
-

1. I added the new configuration to the defaults. In the meantime I modified 
the names of the kerberos keytab/principal settings to follow the Namenode 
convention.

2. I also found that the bind host was ignored, so I fixed it and added an 
additional unit test (host + port are in the 'address' configuration, but the 
host can be overridden with the bind-host).

3. Added hadoop.css and removed the unnecessary constants.

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch, HDFS-12007-HDFS-7240.005.patch, Screen Shot 
> 2017-06-22 at 10.28.05 PM.png, Screen Shot 2017-06-22 at 10.28.32 PM.png, 
> Screen Shot 2017-06-22 at 10.28.48 PM.png
>
>
> As a first step toward a webui for KSM/SCM we need to start and stop a 
> HttpServer2 with KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12023) Ozone: test if all the configuration keys documented in ozone-defaults.xml

2017-06-23 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12023:

Attachment: HDFS-12023-HDFS-7240.002.patch

> Ozone: test if all the configuration keys documented in ozone-defaults.xml
> --
>
> Key: HDFS-12023
> URL: https://issues.apache.org/jira/browse/HDFS-12023
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: test
> Attachments: HDFS-12023-HDFS-7240.001.patch, 
> HDFS-12023-HDFS-7240.002.patch
>
>
> HDFS-11990 added the missing configuration entries to the ozone-defaults.xml.
> This patch contains a unit test which tests if all the configuration keys are 
> still documented.
> (Constant fields of the specific configuration classes which end with _KEY 
> should be part of the defaults.xml.) 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Attachment: HDFS-12007-HDFS-7240.005.patch

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch, HDFS-12007-HDFS-7240.005.patch, Screen Shot 
> 2017-06-22 at 10.28.05 PM.png, Screen Shot 2017-06-22 at 10.28.32 PM.png, 
> Screen Shot 2017-06-22 at 10.28.48 PM.png
>
>
> As a first step toward a web UI for KSM/SCM we need to start and stop an 
> HttpServer2 with the KSM and SCM processes. As with the Namenode and Datanode, 
> this could be done with a small wrapper class, but in practice a common super 
> class avoids duplicating code between KSM and SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12034) Ozone: Web interface for KSM

2017-06-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12034:

Description: 
This is the counterpart of HDFS-12005, but it's about the web interface of the 
Ozone KSM server. I created a separate issue to collect the required 
data/mxbeans separately and to handle the two web interfaces independently, one by one.

Required data:

* TODO

  was:
This is a proposal about how a web interface could be implemented for SCM (and 
later for KSM), similar to the namenode UI.

1. JS framework

There are three big options here. 

A.) One is to use a full-featured web framework with all the webpack/npm 
minify/uglify magic. At build time the webpack/npm scripts would be run and the 
result added to the jar file. 

B.) It could be simplified if the generated minified/uglified js files are 
added to the project at commit time. It requires an additional step for every 
new patch (generating the new minified javascript) but doesn't require 
additional JS build tools during the build.

C.) The third option is to make it as simple as possible, similar to the current 
namenode UI, which uses javascript but commits every dependency (without JS 
minify/uglify and other preprocessing).

I prefer the third one because:

 * I have seen a lot of problems during frequent builds of older tez-ui 
versions (bower version mismatch, npm version mismatch, npm transitive 
dependency problems, proxy problems with older versions). All of them could be 
fixed, but that requires additional JS/NPM magic/knowledge. Without an 
additional npm build step the HDFS project build can be kept simpler.

 * The complexity of the planned SCM/KSM UI (hopefully it will remain simple) 
doesn't require a more sophisticated model (e.g. we don't need JS require, as 
we need only a few controllers).

 * HDFS developers are mostly backend developers, not JS developers.

2. Frameworks 

The big advantage of a more modern JS framework is the simplified programming 
model (for example two-way data binding). I suggest using a more modern 
framework (not just jquery) which supports plain JS (not just 
ECMA2015/2016/typescript) and just including the required JS files in the 
project (similar to the included bootstrap, or the way the existing namenode UI 
works). 
 
  * React could be a good candidate, but it requires more libraries as it's just 
a UI framework; even the REST calls need a separate library. It can be used with 
plain javascript instead of JSX and classes, but that is not straightforward and 
it's more verbose.
 
  * Ember is used in yarnui2, but the main strength of ember is the CLI, which 
couldn't be used for the simplified approach easily. I think ember fits the A.) 
option best.

  * Angular 1 is a good candidate (but not as fancy). With angular 1 the 
component-based approach should be used (which would later make it easier to 
migrate to angular 2 or react).

  * The mainstream use of Angular 2 is with typescript; it could work with plain 
JS, but that requires additional knowledge, and most of the tutorials and 
documentation show the typescript approach.

I suggest using angular 1 or react. Angular may be easier to use, as we don't 
need to emulate JSX with function calls and simple HTML templates can be used.

3. Backend

I would prefer the approach of the existing namenode UI where the backend is 
just the JMX endpoint. To keep it as simple as possible, I suggest avoiding a 
dedicated REST backend if possible. Later we can use the REST API of SCM/KSM 
once it is implemented. 



> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> This is the counterpart of HDFS-12005, but it's about the web interface of the 
> Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and to handle the two web interfaces independently, one by one.
> Required data:
> * TODO



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12034) Ozone: Web interface for KSM

2017-06-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12034:

Description: 
This is the counterpart of HDFS-12005, but it's about the web interface of the 
Ozone KSM server. I created a separate issue to collect the required 
data/mxbeans separately and to handle the two web interfaces independently, one by one.

Required data (Work in progress):

* KSMMetrics data (numVolumeCreates, numVolumeModifes)
* Available volumes (similar to the file browser of the namenode web ui)
* Available buckets (per volumes)
* Available keys (per buckets)

  was:
This is the counterpart of HDFS-12005, but it's about the web interface of the 
Ozone KSM server. I created a separate issue to collect the required 
data/mxbeans separately and to handle the two web interfaces independently, one by one.

Required data:

* TODO


> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> This is the counterpart of HDFS-12005, but it's about the web interface of the 
> Ozone KSM server. I created a separate issue to collect the required 
> data/mxbeans separately and to handle the two web interfaces independently, one by one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volumes)
> * Available keys (per buckets)
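A rough sketch of how the data listed above could be exposed to the web page, 
following the namenode-UI pattern of reading everything from the /jmx servlet. 
The interface and method names below are illustrative assumptions, not the 
actual patch:

{code}
// Hypothetical MXBean shape for the KSM web UI data listed above; the real
// interface may look different. Once registered (e.g. via Hadoop's MBeans
// helper), the values become readable through the standard /jmx servlet,
// the same way the namenode UI consumes its data.
public interface KSMMXBean {

  /** KSMMetrics counter: number of volume create operations served. */
  long getNumVolumeCreates();

  /** KSMMetrics counter: number of volume modify operations served. */
  long getNumVolumeModifies();

  /** Names of the available volumes, for a file-browser-like listing. */
  String[] getVolumeNames();
}
{code}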



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12034) Ozone: Web interface for KSM

2017-06-24 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12034:
---

 Summary: Ozone: Web interface for KSM
 Key: HDFS-12034
 URL: https://issues.apache.org/jira/browse/HDFS-12034
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


This is a proposal about how a web interface could be implemented for SCM (and 
later for KSM), similar to the namenode UI.

1. JS framework

There are three big options here. 

A.) One is to use a full-featured web framework with all the webpack/npm 
minify/uglify magic. At build time the webpack/npm scripts would be run and the 
result added to the jar file. 

B.) It could be simplified if the generated minified/uglified js files are 
added to the project at commit time. It requires an additional step for every 
new patch (generating the new minified javascript) but doesn't require 
additional JS build tools during the build.

C.) The third option is to make it as simple as possible, similar to the current 
namenode UI, which uses javascript but commits every dependency (without JS 
minify/uglify and other preprocessing).

I prefer the third one because:

 * I have seen a lot of problems during frequent builds of older tez-ui 
versions (bower version mismatch, npm version mismatch, npm transitive 
dependency problems, proxy problems with older versions). All of them could be 
fixed, but that requires additional JS/NPM magic/knowledge. Without an 
additional npm build step the HDFS project build can be kept simpler.

 * The complexity of the planned SCM/KSM UI (hopefully it will remain simple) 
doesn't require a more sophisticated model (e.g. we don't need JS require, as 
we need only a few controllers).

 * HDFS developers are mostly backend developers, not JS developers.

2. Frameworks 

The big advantage of a more modern JS framework is the simplified programming 
model (for example two-way data binding). I suggest using a more modern 
framework (not just jquery) which supports plain JS (not just 
ECMA2015/2016/typescript) and just including the required JS files in the 
project (similar to the included bootstrap, or the way the existing namenode UI 
works). 
 
  * React could be a good candidate, but it requires more libraries as it's just 
a UI framework; even the REST calls need a separate library. It can be used with 
plain javascript instead of JSX and classes, but that is not straightforward and 
it's more verbose.
 
  * Ember is used in yarnui2, but the main strength of ember is the CLI, which 
couldn't be used for the simplified approach easily. I think ember fits the A.) 
option best.

  * Angular 1 is a good candidate (but not as fancy). With angular 1 the 
component-based approach should be used (which would later make it easier to 
migrate to angular 2 or react).

  * The mainstream use of Angular 2 is with typescript; it could work with plain 
JS, but that requires additional knowledge, and most of the tutorials and 
documentation show the typescript approach.

I suggest using angular 1 or react. Angular may be easier to use, as we don't 
need to emulate JSX with function calls and simple HTML templates can be used.

3. Backend

I would prefer the approach of the existing namenode UI where the backend is 
just the JMX endpoint. To keep it as simple as possible, I suggest avoiding a 
dedicated REST backend if possible. Later we can use the REST API of SCM/KSM 
once it is implemented. 




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12021) Ozone: Documentation: Add Ozone-defaults documentation

2017-06-23 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061818#comment-16061818
 ] 

Elek, Marton commented on HDFS-12021:
-

For me (as a user) the Getting Started Guide -- which explains a selected set 
of the settings -- was very useful. So (for me) it's enough to read 
GettingStarted for the most important settings, and I can use the 
ozone-default.xml as a reference.

Maybe another one could be useful which explains the most important settings 
from the tuning/operations point of view.

> Ozone: Documentation: Add Ozone-defaults  documentation
> ---
>
> Key: HDFS-12021
> URL: https://issues.apache.org/jira/browse/HDFS-12021
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
> Attachments: hadoop_doc_front.jpg
>
>
> We need to add documentation about the settings that are exposed via 
> ozone-defaults.xml
> Since ozone is new, we might have to put some extra effort into this to make 
> it easy to understand. In other words, we should write a proper doc 
> explaining what these settings mean and the rationale of various values we 
> choose, instead of a table with lots of settings.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11963) Ozone: Documentation: Add getting started page

2017-06-19 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054667#comment-16054667
 ] 

Elek, Marton commented on HDFS-11963:
-

Just a typo, but it breaks easy copy-paste:

{code}
- ./hdfs --deamon start scm
- ./hdfs --deamon start ksm
{code}

Should be _daemon_

> Ozone: Documentation: Add getting started page
> --
>
> Key: HDFS-11963
> URL: https://issues.apache.org/jira/browse/HDFS-11963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-11963-HDFS-7240.001.patch, 
> HDFS-11963-HDFS-7240.002.patch, HDFS-11963-HDFS-7240.003.patch, 
> HDFS-11963-HDFS-7240.004.patch, Screen Shot 2017-06-11 at 12.11.06 AM.png, 
> Screen Shot 2017-06-11 at 12.11.19 AM.png, Screen Shot 2017-06-11 at 12.11.32 
> AM.png
>
>
> We need to add the Ozone section to hadoop documentation and also a section 
> on how to get started, that is what are configuration settings needed to 
> start running ozone. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11999) Ozone: Clarify error message in case namenode is missing

2017-06-19 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-11999:
---

 Summary: Ozone: Clarify error message in case namenode is missing
 Key: HDFS-11999
 URL: https://issues.apache.org/jira/browse/HDFS-11999
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


The Datanode fails with a confusing error message if the namenode config setting 
is missing, even for Ozone:

{code}
14:33:29.176 [main] ERROR o.a.h.hdfs.server.datanode.DataNode - Exception in 
secureMain
java.io.IOException: No services to connect (NameNodes or SCM).
at 
org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:168)
 ~[classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1440)
 [classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:510) 
[classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2802)
 [classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2705)
 [classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2752)
 [classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2896) 
[classes/:na]
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2920) 
[classes/:na]
14:33:29.177 [main] INFO  org.apache.hadoop.util.ExitUtil - Exiting with status 
1: java.io.IOException: No services to connect (NameNodes or SCM).
{code}
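For illustration only, the kind of message the patch aims for could look like the 
sketch below; the helper, parameter names, and exact wording are assumptions, not 
the committed change:

{code}
import java.io.IOException;
import java.util.Collection;

// Hedged sketch: fail with an actionable message instead of the generic
// "No services to connect (NameNodes or SCM)." text.
final class StartupChecks {
  static void checkServicesConfigured(Collection<String> nnAddresses,
      Collection<String> scmAddresses) throws IOException {
    if (nnAddresses.isEmpty() && scmAddresses.isEmpty()) {
      throw new IOException(
          "Datanode has nothing to connect to: neither a NameNode address "
          + "(dfs.namenode.rpc-address / dfs.nameservices) nor an Ozone SCM "
          + "address (ozone.scm.names) is configured. Configure at least one "
          + "of them before starting the Datanode.");
    }
  }
}
{code}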



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11999) Ozone: Clarify startup error message of Datanode in case namenode is missing

2017-06-19 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11999:

Attachment: HDFS-11999.patch

> Ozone: Clarify startup error message of Datanode in case namenode is missing
> 
>
> Key: HDFS-11999
> URL: https://issues.apache.org/jira/browse/HDFS-11999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-11999.patch
>
>
> The Datanode fails with a confusing error message if the namenode config setting 
> is missing, even for Ozone:
> {code}
> 14:33:29.176 [main] ERROR o.a.h.hdfs.server.datanode.DataNode - Exception in 
> secureMain
> java.io.IOException: No services to connect (NameNodes or SCM).
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:168)
>  ~[classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1440)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:510) 
> [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2802)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2705)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2752)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2896)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2920) 
> [classes/:na]
> 14:33:29.177 [main] INFO  org.apache.hadoop.util.ExitUtil - Exiting with 
> status 1: java.io.IOException: No services to connect (NameNodes or SCM).
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11999) Ozone: Clarify startup error message of Datanode in case namenode is missing

2017-06-19 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11999:

Status: Patch Available  (was: Open)

> Ozone: Clarify startup error message of Datanode in case namenode is missing
> 
>
> Key: HDFS-11999
> URL: https://issues.apache.org/jira/browse/HDFS-11999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-11999.patch
>
>
> The Datanode fails with a confusing error message if the namenode config setting 
> is missing, even for Ozone:
> {code}
> 14:33:29.176 [main] ERROR o.a.h.hdfs.server.datanode.DataNode - Exception in 
> secureMain
> java.io.IOException: No services to connect (NameNodes or SCM).
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:168)
>  ~[classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1440)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:510) 
> [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2802)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2705)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2752)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2896)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2920) 
> [classes/:na]
> 14:33:29.177 [main] INFO  org.apache.hadoop.util.ExitUtil - Exiting with 
> status 1: java.io.IOException: No services to connect (NameNodes or SCM).
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11999) Ozone: Clarify startup error message of Datanode in case namenode is missing

2017-06-19 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11999:

Summary: Ozone: Clarify startup error message of Datanode in case namenode 
is missing  (was: Ozone: Clarify error message in case namenode is missing)

> Ozone: Clarify startup error message of Datanode in case namenode is missing
> 
>
> Key: HDFS-11999
> URL: https://issues.apache.org/jira/browse/HDFS-11999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> The Datanode fails with a confusing error message if the namenode config setting 
> is missing, even for Ozone:
> {code}
> 14:33:29.176 [main] ERROR o.a.h.hdfs.server.datanode.DataNode - Exception in 
> secureMain
> java.io.IOException: No services to connect (NameNodes or SCM).
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:168)
>  ~[classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1440)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:510) 
> [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2802)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2705)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2752)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2896)
>  [classes/:na]
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2920) 
> [classes/:na]
> 14:33:29.177 [main] INFO  org.apache.hadoop.util.ExitUtil - Exiting with 
> status 1: java.io.IOException: No services to connect (NameNodes or SCM).
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12023) Ozone: test if all the configuration keys documented in ozone-defaults.xml

2017-06-23 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12023:
---

 Summary: Ozone: test if all the configuration keys documented in 
ozone-defaults.xml
 Key: HDFS-12023
 URL: https://issues.apache.org/jira/browse/HDFS-12023
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Elek, Marton
Assignee: Elek, Marton


HDFS-11990 added the missing configuration entries to ozone-defaults.xml.

This patch contains a unit test which checks if all the configuration keys are 
still documented.

(Constant fields of the specific configuration classes which end with _KEY 
should be part of the defaults.xml.) 
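A minimal sketch of what such a test could look like. The defaults file name and 
the configuration class below are only examples taken from the description; the 
committed patch may scan several classes and use different names:

{code}
import java.io.InputStream;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.junit.Assert;
import org.junit.Test;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class TestOzoneConfigurationFields {

  @Test
  public void configKeysAreDocumented() throws Exception {
    // Collect every <name> entry from the defaults file on the classpath.
    Set<String> documented = new HashSet<>();
    try (InputStream in = getClass().getClassLoader()
        .getResourceAsStream("ozone-default.xml")) {
      Assert.assertNotNull("defaults file not found on the classpath", in);
      Document doc = DocumentBuilderFactory.newInstance()
          .newDocumentBuilder().parse(in);
      NodeList names = doc.getElementsByTagName("name");
      for (int i = 0; i < names.getLength(); i++) {
        documented.add(names.item(i).getTextContent().trim());
      }
    }
    // Every public static String *_KEY constant must appear in the file.
    for (Field field : OzoneConfigKeys.class.getFields()) {
      if (Modifier.isStatic(field.getModifiers())
          && field.getType() == String.class
          && field.getName().endsWith("_KEY")) {
        String key = (String) field.get(null);
        Assert.assertTrue("Undocumented configuration key: " + key,
            documented.contains(key));
      }
    }
  }
}
{code}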





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12023) Ozone: test if all the configuration keys documented in ozone-defaults.xml

2017-06-23 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12023:

Attachment: HDFS-12023-HDFS-7240.001.patch

> Ozone: test if all the configuration keys documented in ozone-defaults.xml
> --
>
> Key: HDFS-12023
> URL: https://issues.apache.org/jira/browse/HDFS-12023
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: test
> Attachments: HDFS-12023-HDFS-7240.001.patch
>
>
> HDFS-11990 added the missing configuration entries to ozone-defaults.xml.
> This patch contains a unit test which checks if all the configuration keys are 
> still documented
> (constant fields of the specific configuration classes which end with _KEY 
> should be part of the defaults.xml). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12023) Ozone: test if all the configuration keys documented in ozone-defaults.xml

2017-06-23 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12023:

Status: Patch Available  (was: Open)

> Ozone: test if all the configuration keys documented in ozone-defaults.xml
> --
>
> Key: HDFS-12023
> URL: https://issues.apache.org/jira/browse/HDFS-12023
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: test
> Attachments: HDFS-12023-HDFS-7240.001.patch
>
>
> HDFS-11990 added the missing configuration entries to ozone-defaults.xml.
> This patch contains a unit test which checks if all the configuration keys are 
> still documented
> (constant fields of the specific configuration classes which end with _KEY 
> should be part of the defaults.xml). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-21 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12007:
---

 Summary: Ozone: Enable HttpServer2 for SCM and KSM
 Key: HDFS-12007
 URL: https://issues.apache.org/jira/browse/HDFS-12007
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: HDFS-7240


As a first step toward a web UI for KSM/SCM we need to start and stop an 
HttpServer2 with the KSM and SCM processes. As with the Namenode and Datanode, this 
could be done with a small wrapper class, but in practice a common super class 
avoids duplicating code between KSM and SCM.

As a result of this issue, we will have a listening web server in both KSM/SCM 
with a simple template page and with all the default servlets 
(conf/jmx/logLevel/stack).

The https and kerberos code could be reused from the nameserver wrapper.
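As an illustration of the direction (not the actual patch; the class name, method 
names, and configuration handling below are assumptions), the common super class 
could be roughly shaped like this, with KSM and SCM subclasses only supplying 
their name and bind address:

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.HttpServer2;
import org.apache.hadoop.net.NetUtils;

// Hedged sketch of a shared HttpServer2 wrapper for KSM and SCM.
// HttpServer2 already ships the default servlets (/conf, /jmx, /logLevel,
// /stacks), so both daemons get them without extra code.
public abstract class BaseHttpServer {

  private final HttpServer2 httpServer;

  protected BaseHttpServer(Configuration conf, String name) throws IOException {
    InetSocketAddress addr = NetUtils.createSocketAddr(getBindAddress(conf));
    httpServer = new HttpServer2.Builder()
        .setName(name)
        .setConf(conf)
        .addEndpoint(URI.create("http://" + NetUtils.getHostPortString(addr)))
        .build();
  }

  /** KSM and SCM subclasses resolve their own host:port from the config. */
  protected abstract String getBindAddress(Configuration conf);

  public void start() throws IOException {
    httpServer.start();
  }

  public void stop() throws Exception {
    httpServer.stop();
  }
}
{code}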



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes

2017-05-05 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDFS-11600:
---

Assignee: Elek, Marton

> Refactor TestDFSStripedOutputStreamWithFailure test classes
> ---
>
> Key: HDFS-11600
> URL: https://issues.apache.org/jira/browse/HDFS-11600
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Elek, Marton
>Priority: Minor
>
> TestDFSStripedOutputStreamWithFailure has a great number of subclasses. The 
> tests are parameterized based on the name of these subclasses.
> Seems like we could parameterize these tests with JUnit and then not need all 
> these separate test classes.
> Another note, the tests will randomly return instead of running the test. 
> Using {{Assume}} instead would make it more clear in the test output that 
> these tests were skipped.
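For illustration, the parameterized direction described above could look roughly 
like the sketch below (class and parameter names are made up; only the JUnit 
mechanics, {{Parameterized}} plus {{Assume}}, are the point):

{code}
import static org.junit.Assume.assumeTrue;

import java.util.ArrayList;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Hedged sketch: one parameterized class instead of one subclass per length.
@RunWith(Parameterized.class)
public class TestStripedOutputStreamWithFailureSketch {

  @Parameters(name = "length={0}")
  public static Collection<Object[]> lengths() {
    Collection<Object[]> params = new ArrayList<>();
    for (int i = 0; i < 20; i++) {
      params.add(new Object[] {i * 64 * 1024}); // one former subclass per entry
    }
    return params;
  }

  private final int length;

  public TestStripedOutputStreamWithFailureSketch(int length) {
    this.length = length;
  }

  @Test
  public void testWriteWithDataNodeFailure() {
    // Assume makes the skip visible in the report instead of a silent return.
    assumeTrue("skipping the trivial zero-length case", length > 0);
    // ... run the original write-with-failure scenario for this length ...
  }
}
{code}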



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes

2017-05-05 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11600:

Status: Patch Available  (was: Open)

> Refactor TestDFSStripedOutputStreamWithFailure test classes
> ---
>
> Key: HDFS-11600
> URL: https://issues.apache.org/jira/browse/HDFS-11600
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-11600-1.patch
>
>
> TestDFSStripedOutputStreamWithFailure has a great number of subclasses. The 
> tests are parameterized based on the name of these subclasses.
> Seems like we could parameterize these tests with JUnit and then not need all 
> these separate test classes.
> Another note, the tests will randomly return instead of running the test. 
> Using {{Assume}} instead would make it more clear in the test output that 
> these tests were skipped.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes

2017-05-05 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-11600:

Attachment: HDFS-11600-1.patch

> Refactor TestDFSStripedOutputStreamWithFailure test classes
> ---
>
> Key: HDFS-11600
> URL: https://issues.apache.org/jira/browse/HDFS-11600
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-11600-1.patch
>
>
> TestDFSStripedOutputStreamWithFailure has a great number of subclasses. The 
> tests are parameterized based on the name of these subclasses.
> Seems like we could parameterize these tests with JUnit and then not need all 
> these separate test classes.
> Another note, the tests will randomly return instead of running the test. 
> Using {{Assume}} instead would make it more clear in the test output that 
> these tests were skipped.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12464) Ozone Documentation: More deteailed documentation about the ozone components

2017-09-15 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12464:

Attachment: HDFS-7240-HDFS-12464.001.patch

Work in progress. I would like to add more information about how the data is 
stored on the datanode.

But in the meantime, feel free to comment on the proposed structure.

> Ozone Documentation: More deteailed documentation about the ozone components
> 
>
> Key: HDFS-12464
> URL: https://issues.apache.org/jira/browse/HDFS-12464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-7240-HDFS-12464.001.patch
>
>
> I started to write a more detailed introduction about the Ozone components. 
> The goal is to explain the basic responsibility of the components and the 
> basic network topology (which components send messages, and to where?). 
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12464) Ozone Documentation: More deteailed documentation about the ozone components

2017-09-15 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12464:
---

 Summary: Ozone Documentation: More deteailed documentation about 
the ozone components
 Key: HDFS-12464
 URL: https://issues.apache.org/jira/browse/HDFS-12464
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: HDFS-7240
Affects Versions: HDFS-7240
Reporter: Elek, Marton


I started to write a more detailed introduction about the Ozone components. The 
goal is to explain the basic responsibility of the components and the basic 
network topology (which components send messages, and to where?). 

 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12464) Ozone Documentation: More deteailed documentation about the ozone components

2017-09-15 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDFS-12464:
---

Assignee: Elek, Marton

> Ozone Documentation: More deteailed documentation about the ozone components
> 
>
> Key: HDFS-12464
> URL: https://issues.apache.org/jira/browse/HDFS-12464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> I started to write a more detailed introduction about the Ozone components. 
> The goal is to explain the basic responsibility of the components and the 
> basic network topology (which components send messages, and to where?). 
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12468) Ozone: fix hard coded version in the Ozone GettingStarted guide

2017-09-15 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12468:

Attachment: HDFS-12468-HDFS-7240.001.patch

> Ozone: fix hard coded version in the Ozone GettingStarted guide
> ---
>
> Key: HDFS-12468
> URL: https://issues.apache.org/jira/browse/HDFS-12468
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12468-HDFS-7240.001.patch
>
>
> In the current OzoneGettingStarted guide there is a hard coded version 
> (3.0.0-alpha4 currently).
> By renaming the file to .md.vm the site plugin will use filtering, and 
> ${project.version} can be used to always show the actual version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12468) Ozone: fix hard coded version in the Ozone GettingStarted guide

2017-09-15 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12468:
---

 Summary: Ozone: fix hard coded version in the Ozone GettingStarted 
guide
 Key: HDFS-12468
 URL: https://issues.apache.org/jira/browse/HDFS-12468
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: HDFS-7240


In the current OzoneGettingStarted guide there is a hard coded version 
(3.0.0-alpha4 currently).

By renaming the file to .md.vm the site plugin will use filtering, and 
${project.version} can be used to always show the actual version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12468) Ozone: fix hard coded version in the Ozone GettingStarted guide

2017-09-15 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12468:

Status: Patch Available  (was: Open)

> Ozone: fix hard coded version in the Ozone GettingStarted guide
> ---
>
> Key: HDFS-12468
> URL: https://issues.apache.org/jira/browse/HDFS-12468
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12468-HDFS-7240.001.patch
>
>
> In the current OzoneGettingStarted guide there is a hard coded version 
> (3.0.0-alpha4 currently).
> By renaming the file to .md.vm the site plugin will use filtering, and 
> ${project.version} can be used to always show the actual version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12468) Ozone: fix hard coded version in the Ozone GettingStarted guide

2017-09-15 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168002#comment-16168002
 ] 

Elek, Marton commented on HDFS-12468:
-

To test:

{code}
cd hadoop-hdfs-project/hadoop-hdfs
mvn clean
mvn site
firefox target/site/OzoneGettingStarted.html
{code}

The right version should appear in the first part of the documentation.


> Ozone: fix hard coded version in the Ozone GettingStarted guide
> ---
>
> Key: HDFS-12468
> URL: https://issues.apache.org/jira/browse/HDFS-12468
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12468-HDFS-7240.001.patch
>
>
> In the current OzoneGettingStarted guide there is a hard coded version 
> (3.0.0-alpha4 currently).
> By renaming the file to .md.vm the site plugin will use filtering, and 
> ${project.version} can be used to always show the actual version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-15 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12469:

Attachment: HDFS-12469-HDFS-7240.WIP1.patch

Work in progress patch. Not for merge, yet.

I adjusted the base image, as I originally created this with a simpler base image 
(for example, KSM/SCM started as the root user in the container, but that has now 
been changed).

TODO items (will be fixed soon):

1. Test it on OSX (I tested it on linux)
2. Adjust the configuration (some settings contain localhost and should be fixed) 
3. Add documentation. I am wondering which is better: extend the GettingStarted 
guide or write a separate one.
4. Try out the commands from the GettingStartedGuide + Corona run

If you would like to test it now, you can

{code}
cd dev-support/compose/ozone
docker-compose up -d  
{code}

Then check the namenode at 0.0.0.0:9870.

> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.WIP1.patch
>
>
> The goal here is to create a docker-compose definition for ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build the ozone cluster could be started easily with 
> a simple docker-compose up command.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-15 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12469:
---

 Summary: Ozone: Create docker-compose definition to easily test 
real clusters
 Key: HDFS-12469
 URL: https://issues.apache.org/jira/browse/HDFS-12469
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: HDFS-7240


The goal here is to create a docker-compose definition for ozone pseudo-cluster 
with docker (one component per container). 

Ideally, after a full build the ozone cluster could be started easily with 
a simple docker-compose up command.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12464) Ozone: More detailed documentation about the ozone components

2017-09-15 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12464:

Summary: Ozone: More detailed documentation about the ozone components  
(was: Ozone Documentation: More deteailed documentation about the ozone 
components)

> Ozone: More detailed documentation about the ozone components
> -
>
> Key: HDFS-12464
> URL: https://issues.apache.org/jira/browse/HDFS-12464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-7240-HDFS-12464.001.patch
>
>
> I started to write a more detailed introduction about the Ozone components. 
> The goal is to explain the basic responsibility of the components and the 
> basic network topology (which components send messages, and to where?). 
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-19 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12469:

Attachment: HDFS-12469-HDFS-7240.WIP2.patch

Second, 'almost done' version.

I refactored the Getting Started guide:

 * I added different methods to run an ozone cluster (1. docker + predefined 
image, 2. docker + build from source, 3. shell script + pre-built release (will 
be useful after the 3.1 release), 4. build from source + shell script).
 * I moved the configuration out to a separate page and provided just the 
minimal configuration for every use case.

TODO:
 * Test it on OSX
 * Read all the GettingStarted instructions again and test the commands
 * Add the new file to the site menu

> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.WIP1.patch, 
> HDFS-12469-HDFS-7240.WIP2.patch
>
>
> The goal here is to create a docker-compose definition for ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build the ozone cluster could be started easily with 
> a simple docker-compose up command.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12475) Ozone : add document for port sharing with WebHDFS

2017-09-19 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16172241#comment-16172241
 ] 

Elek, Marton commented on HDFS-12475:
-

In HDFS-12469 I plan to move the Configuration part out of the GettingStarted 
guide to a separate page. Currently it contains only the most critical 
configuration, but we can add this information there (after HDFS-12469 is 
resolved), along with some other information about the possible configuration.

(In addition to OzoneCommandShell.md.)

> Ozone : add document for port sharing with WebHDFS
> --
>
> Key: HDFS-12475
> URL: https://issues.apache.org/jira/browse/HDFS-12475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Lokesh Jain
>  Labels: ozoneDoc
>
> Currently Ozone's REST API uses port 9864; all commands mentioned in 
> OzoneCommandShell.md use the address localhost:9864.
> This port was used by WebHDFS and is now shared by Ozone. The value is 
> controlled by the config key {{dfs.datanode.http.address}}. We should 
> document this information in {{OzoneCommandShell.md}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12477) Ozone: Some minor text improvement in SCM web UI

2017-09-19 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDFS-12477:
---

Assignee: Elek, Marton

> Ozone: Some minor text improvement in SCM web UI
> 
>
> Key: HDFS-12477
> URL: https://issues.apache.org/jira/browse/HDFS-12477
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: scm, ui
>Reporter: Weiwei Yang
>Assignee: Elek, Marton
>Priority: Trivial
> Attachments: haskey.png, healthy_nodes_place.png, Revise text.png
>
>
> While trying out the SCM UI, there seem to be some small text problems: 
> bq. Node Manager: Minimum chill mode nodes)
> It has an extra ).
> bq. $$hashKey object:9
> I am not really sure what does this mean? Would this help?
> bq. Node counts
> Can we place the HEALTHY ones at the top of the table?
> bq. Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 
> nodes have reported in.
> Can we refine this text a bit?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-21 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175531#comment-16175531
 ] 

Elek, Marton commented on HDFS-12469:
-

While I read the Zeppelin jira about the containerization of Zeppelin I found 
the release policy:

https://www.apache.org/dev/release-distribution.html#unreleased

So if I understood it well, unreleased builds should be handled differently. 
Maybe it would be better to keep the end-user documentation in the source code 
(Getting Started based on binary builds or predefined images) and move the 
Getting Started guide for developers (starting ozone after a build, or with 
docker but from the source code) to the wiki.

Any thoughts, [~anu]?

> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.WIP1.patch, 
> HDFS-12469-HDFS-7240.WIP2.patch
>
>
> The goal here is to create a docker-compose definition for ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build the ozone cluster could be started easily with 
> a simple docker-compose up command.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12527) javadoc: error - class file for org.apache.http.annotation.ThreadSafe not found

2017-09-22 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177140#comment-16177140
 ] 

Elek, Marton commented on HDFS-12527:
-

The problem is that the httpclient and httpcore versions are incompatible.

We have two version definitions in hadoop-project/pom.xml:
 
{code}
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.5.2</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore</artifactId>
  <version>4.4.4</version>
</dependency>
{code}

The problem is that the second artifact is a dependency of the first one. The 
second one should be pinned to exactly the version that the first one uses.

Both versions were bumped (HADOOP-14654 and HADOOP-14655), but HADOOP-14655 was 
reverted. Now we use an httpcore version that differs from the one httpclient 
4.5.2 uses by default. So the two jiras should be applied together or reverted 
together.

> javadoc: error - class file for org.apache.http.annotation.ThreadSafe not 
> found
> ---
>
> Key: HDFS-12527
> URL: https://issues.apache.org/jira/browse/HDFS-12527
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Mukul Kumar Singh
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.10.4:jar (module-javadocs) on 
> project hadoop-hdfs-client: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - 
> /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java:694:
>  warning - Tag @link: reference not found: StripingCell
> [ERROR] javadoc: error - class file for org.apache.http.annotation.ThreadSafe 
> not found
> [ERROR] 
> [ERROR] Command line was: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre/../bin/javadoc
>  -J-Xmx768m @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/target/api' 
> dir.
> {code}
> To reproduce the error above, run
> {code}
> mvn package -Pdist -DskipTests -DskipDocs -Dtar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-22 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176161#comment-16176161
 ] 

Elek, Marton commented on HDFS-12469:
-

Oh, my suggestion was not to avoid the docker based approach, just that:

1. We wouldn't like to upload the snapshot images to the docker hub.

2. Most probably we need to separate the GettingStarted guide: the first part 
(for developers) could be moved to the developer wiki, while the other part will 
be read by the end user; this should be part of the source code but will be valid 
only after the first Ozone release. I checked the existing documentation and there 
was no information about builds, at least not in the generated site.

> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.WIP1.patch, 
> HDFS-12469-HDFS-7240.WIP2.patch
>
>
> The goal here is to create a docker-compose definition for ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build the ozone cluster could be started easily with 
> a simple docker-compose up command.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12477) Ozone: Some minor text improvement in SCM web UI

2017-09-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12477:

Status: Patch Available  (was: Open)

> Ozone: Some minor text improvement in SCM web UI
> 
>
> Key: HDFS-12477
> URL: https://issues.apache.org/jira/browse/HDFS-12477
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: scm, ui
>Reporter: Weiwei Yang
>Assignee: Elek, Marton
>Priority: Trivial
> Attachments: haskey.png, HDFS-12477-HDFS-7240.000.patch, 
> healthy_nodes_place.png, Revise text.png
>
>
> While trying out the SCM UI, there seem to be some small text problems: 
> bq. Node Manager: Minimum chill mode nodes)
> It has an extra ).
> bq. $$hashKey object:9
> I am not really sure what does this mean? Would this help?
> bq. Node counts
> Can we place the HEALTHY ones at the top of the table?
> bq. Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 
> nodes have reported in.
> Can we refine this text a bit?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12477) Ozone: Some minor text improvement in SCM web UI

2017-09-22 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176282#comment-16176282
 ] 

Elek, Marton commented on HDFS-12477:
-

Thanks for the feedback [~cheersyang]

1. The extra ) is removed.
2. $hashKey is removed (FTR: we display all the JMX keys, but we have a 
blacklist to hide the meaningless keys).
3. I defined an order for the node statuses: HEALTHY will be at the top and 
UNKNOWN at the bottom.

I can't fix the Chill Mode status text on the web UI side, as it's not defined in 
the web UI but in 
./src/main/java/org/apache/hadoop/ozone/scm/node/SCMNodeManager.java. 

I suggest fixing it there in a separate issue. 
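For the record, the fix in SCMNodeManager could be as small as rebuilding the 
status string; the sketch below is only illustrative (the method and parameter 
names are assumptions, not the actual code):

{code}
// Hedged sketch of a clearer chill mode status line for SCMNodeManager.
private static String chillModeStatusText(boolean inChillMode,
    int reportedNodes, int totalNodes) {
  return String.format(
      "Chill mode status: %s. %d of %d known nodes have reported in.",
      inChillMode ? "in chill mode" : "out of chill mode",
      reportedNodes, totalNodes);
}
{code}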

> Ozone: Some minor text improvement in SCM web UI
> 
>
> Key: HDFS-12477
> URL: https://issues.apache.org/jira/browse/HDFS-12477
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: scm, ui
>Reporter: Weiwei Yang
>Assignee: Elek, Marton
>Priority: Trivial
> Attachments: haskey.png, HDFS-12477-HDFS-7240.000.patch, 
> healthy_nodes_place.png, Revise text.png
>
>
> While trying out the SCM UI, there seem to be some small text problems: 
> bq. Node Manager: Minimum chill mode nodes)
> It has an extra ).
> bq. $$hashKey object:9
> I am not really sure what does this mean? Would this help?
> bq. Node counts
> Can we place the HEALTHY ones at the top of the table?
> bq. Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 
> nodes have reported in.
> Can we refine this text a bit?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12477) Ozone: Some minor text improvement in SCM web UI

2017-09-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12477:

Labels: ozoneMerge  (was: )

> Ozone: Some minor text improvement in SCM web UI
> 
>
> Key: HDFS-12477
> URL: https://issues.apache.org/jira/browse/HDFS-12477
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: scm, ui
>Reporter: Weiwei Yang
>Assignee: Elek, Marton
>Priority: Trivial
>  Labels: ozoneMerge
> Attachments: haskey.png, HDFS-12477-HDFS-7240.000.patch, 
> healthy_nodes_place.png, Revise text.png
>
>
> While trying out the SCM UI, there seem to be some small text problems: 
> bq. Node Manager: Minimum chill mode nodes)
> It has an extra ).
> bq. $$hashKey object:9
> I am not really sure what does this mean? Would this help?
> bq. Node counts
> Can we place the HEALTHY ones at the top of the table?
> bq. Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 
> nodes have reported in.
> Can we refine this text a bit?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-22 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176165#comment-16176165
 ] 

Elek, Marton commented on HDFS-12469:
-

I will also create non-official, non-Apache Ozone image snapshots (actually I 
already created one: https://hub.docker.com/r/flokkr/hadoop/tags/), which can 
be used by developers and users to test the current state very easily. 

I will add the usage instructions both to the wiki and to the source code 
(site), but in the source code/site they will later be replaced by the 
official Hadoop image from https://issues.apache.org/jira/browse/HADOOP-14898

> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.WIP1.patch, 
> HDFS-12469-HDFS-7240.WIP2.patch
>
>
> The goal here is to create a docker-compose definition for an Ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build, the Ozone cluster could be started easily with 
> a simple docker-compose up command.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12477) Ozone: Some minor text improvement in SCM web UI

2017-09-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12477:

Attachment: HDFS-12477-HDFS-7240.000.patch

> Ozone: Some minor text improvement in SCM web UI
> 
>
> Key: HDFS-12477
> URL: https://issues.apache.org/jira/browse/HDFS-12477
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: scm, ui
>Reporter: Weiwei Yang
>Assignee: Elek, Marton
>Priority: Trivial
> Attachments: haskey.png, HDFS-12477-HDFS-7240.000.patch, 
> healthy_nodes_place.png, Revise text.png
>
>
> While trying out the SCM UI, there seem to be some small text problems, 
> bq. Node Manager: Minimum chill mode nodes)
> It has an extra ).
> bq. $$hashKey object:9
> I am not really sure what this means. Would this help?
> bq. Node counts
> Can we place the HEALTHY ones at the top of the table?
> bq. Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 
> nodes have reported in.
> Can we refine this text a bit?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12464) Ozone: More detailed documentation about the ozone components

2017-09-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12464:

Labels: ozoneMerge  (was: )

> Ozone: More detailed documentation about the ozone components
> -
>
> Key: HDFS-12464
> URL: https://issues.apache.org/jira/browse/HDFS-12464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: ozoneMerge
> Attachments: HDFS-12464-HDFS-7240.001.patch, 
> HDFS-7240-HDFS-12464.001.patch
>
>
> I started to write a more detailed introduction to the Ozone components. 
> The goal is to explain the basic responsibility of the components and the 
> basic network topology (which components send messages, and to where?). 
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12464) Ozone: More detailed documentation about the ozone components

2017-09-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12464:

Attachment: HDFS-12464-HDFS-7240.001.patch

The first usable version is uploaded. Please check the validity of the 
statements about Ozone (and feel free to fix any language problems you see...).

> Ozone: More detailed documentation about the ozone components
> -
>
> Key: HDFS-12464
> URL: https://issues.apache.org/jira/browse/HDFS-12464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12464-HDFS-7240.001.patch, 
> HDFS-7240-HDFS-12464.001.patch
>
>
> I started to write a more detailed introduction to the Ozone components. 
> The goal is to explain the basic responsibility of the components and the 
> basic network topology (which components send messages, and to where?). 
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12464) Ozone: More detailed documentation about the ozone components

2017-09-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12464:

Status: Patch Available  (was: Open)

> Ozone: More detailed documentation about the ozone components
> -
>
> Key: HDFS-12464
> URL: https://issues.apache.org/jira/browse/HDFS-12464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12464-HDFS-7240.001.patch, 
> HDFS-7240-HDFS-12464.001.patch
>
>
> I started to write a more detailed introduction to the Ozone components. 
> The goal is to explain the basic responsibility of the components and the 
> basic network topology (which components send messages, and to where?). 
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12469:

Status: Patch Available  (was: Open)

> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.WIP1.patch, 
> HDFS-12469-HDFS-7240.WIP2.patch, HDFS-12477-HDFS-7240.000.patch
>
>
> The goal here is to create a docker-compose definition for an Ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build, the Ozone cluster could be started easily with 
> a simple docker-compose up command.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


