[jira] [Commented] (HDDS-1553) Add metrics in rack aware container placement policy

2019-08-07 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901786#comment-16901786
 ] 

Junjie Chen commented on HDDS-1553:
---

[~Sammi], I don't have time recently, so please feel free to continue with this.

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> To collect the following statistics: 
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)
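For reference, a minimal sketch of how these counters could be exposed through Hadoop metrics2 (the class and metric names here are hypothetical, not taken from the actual patch):

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical sketch only; the real metric class and counter names may differ.
@Metrics(about = "SCM rack-aware container placement metrics", context = "SCM")
public class PlacementMetricsSketch {
  @Metric private MutableCounterLong datanodeRequestCount;        // A
  @Metric private MutableCounterLong datanodeChooseSuccessCount;  // B (includes C)
  @Metric private MutableCounterLong datanodeChooseFallbackCount; // C

  public void incrDatanodeRequestCount(long delta) {
    datanodeRequestCount.incr(delta);
  }

  public void incrDatanodeChooseSuccessCount() {
    datanodeChooseSuccessCount.incr();
  }

  public void incrDatanodeChooseFallbackCount() {
    datanodeChooseFallbackCount.incr();
  }

  // Failed allocations are derived rather than stored: A - B.
  public long getDatanodeChooseFailureCount() {
    return datanodeRequestCount.value() - datanodeChooseSuccessCount.value();
  }
}
{code}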



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1553) Add metrics in rack aware container placement policy

2019-08-07 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen reassigned HDDS-1553:
-

Assignee: Sammi Chen  (was: Junjie Chen)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> To collect the following statistics: 
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)






[jira] [Assigned] (HDDS-1894) Support listPipelines by filters in scmcli

2019-08-06 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen reassigned HDDS-1894:
-

Assignee: Li Cheng  (was: Junjie Chen)

Hi Timmy,

Could you please take a look at this? I don't have time for it recently.

> Support listPipelines by filters in scmcli
> --
>
> Key: HDDS-1894
> URL: https://issues.apache.org/jira/browse/HDDS-1894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Li Cheng
>Priority: Major
>
> Today scmcli has a subcommand that lists all pipelines. This ticket is 
> opened to allow filtering the results by switches, e.g., by Factor: THREE and 
> State: OPEN. This will be useful for troubleshooting in large clusters.
>  
> {code}
> bin/ozone scmcli listPipelines
> Pipeline[ Id: a8d1b0c9-e1d4-49ea-8746-3f61dfb5ee3f, Nodes: 
> cce44fde-bc8d-4063-97b3-6f557af756e1\{ip: 10.17.112.65, host: 
> ia0230.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:ONE, State:OPEN]
> Pipeline[ Id: c9c453d1-d74c-4414-b87f-1d3585d78a7c, Nodes: 
> 0b7b0b93-8323-4b82-8cc0-a9a5c10ab827\{ip: 10.17.112.29, host: 
> ia0138.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}c756a0e0-5a1b-4d03-ba5b-cafbcabac877\{ip: 10.17.112.27, host: 
> ia0134.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}bee45bd7-1ee6-4726-b3d1-81476dc1eb49\{ip: 10.17.112.28, host: 
> ia0136.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:THREE, State:OPEN]
> {code}
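A rough sketch of the kind of filtering this ticket asks for (the Pipeline/ScmClient accessor names below are assumptions and may not match the actual code base):

{code:java}
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: narrow the result of listPipelines to a given factor
// and state, e.g. for a "--factor THREE --state OPEN" style switch.
static List<Pipeline> filterPipelines(List<Pipeline> pipelines) {
  return pipelines.stream()
      .filter(p -> p.getFactor() == HddsProtos.ReplicationFactor.THREE)
      .filter(p -> p.getPipelineState() == Pipeline.PipelineState.OPEN)
      .collect(Collectors.toList());
}
{code}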






[jira] [Assigned] (HDDS-1587) Support dynamically adding delegated class to filteredclass loader

2019-05-30 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen reassigned HDDS-1587:
-

Assignee: Junjie Chen  (was: Xiaoyu Yao)

> Support dynamically adding delegated class to filteredclass loader
> --
>
> Key: HDDS-1587
> URL: https://issues.apache.org/jira/browse/HDDS-1587
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Junjie Chen
>Priority: Major
>
> HDDS-922 added a filtered class loader with a list of delegated classes that 
> will be loaded with the app launcher's classloader. With security enabled on 
> ozone-0.4, there are some incompatible changes in the Hadoop-common and 
> hadoop-auth modules between Hadoop-2.x and Hadoop-3.x. Some examples can be 
> seen in HDDS-1080, where the fix had to be made along with a rebuild/release. 
>  
> This ticket is opened to allow dynamically adding delegated classes or class 
> prefixes via an environment variable. This way, we can easily adjust the 
> setting in different deployments without a rebuild/release.
>  
>  
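A minimal sketch of what this could look like (the environment variable name and the way it is wired into the filtered class loader are assumptions, not the actual implementation):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical: read extra delegated class names / package prefixes from an
// environment variable, to be appended to the class loader's built-in list.
static List<String> extraDelegatedPrefixes() {
  String env = System.getenv("OZONE_CLASSLOADER_DELEGATED_CLASSES");
  if (env == null || env.trim().isEmpty()) {
    return new ArrayList<>();
  }
  // Comma-separated, e.g. "org.apache.hadoop.security.,org.apache.hadoop.conf."
  return Arrays.asList(env.trim().split(","));
}
{code}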






[jira] [Commented] (HDDS-976) Support YAML format network topology cluster definition

2019-03-29 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804756#comment-16804756
 ] 

Junjie Chen commented on HDDS-976:
--

[~xyao]

I opened an early PR; please take a look.

> Support YAML format network topology cluster definition
> ---
>
> Key: HDDS-976
> URL: https://issues.apache.org/jira/browse/HDDS-976
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: NetworkTopologyDefault.yaml
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-01-22 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748482#comment-16748482
 ] 

Junjie Chen commented on HDDS-699:
--

Thanks Sammi, 

I just finished a basic first-round review; some comments below:

{code:java}
+// Remove any trailing NetConf.PATH_SEPARATOR
+int len = path.length();
+while (len > 0 && path.charAt(len-1) == NetConf.PATH_SEPARATOR) {
+  path = path.substring(0, len-1);
+  len = path.length();
+}
+return path;
+  }
{code}

This could be replaced by:
{code:java}
return path.replaceAll(NetConf.PATH_SEPARATOR_STR + "+$", "");
{code}

Please also add documentation for the public APIs, for example:

{code:java}
+  public static int locationToDepth(String location) {
+String normalizedLocation = normalize(location);
+return normalizedLocation.split(NetConf.PATH_SEPARATOR_STR).length;
+  }

{code}
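Something along these lines could work (the wording is only a suggestion):

{code:java}
+  /**
+   * Returns the depth of a network location, i.e. the number of segments
+   * produced by splitting the normalized location string on
+   * NetConf.PATH_SEPARATOR_STR.
+   *
+   * @param location a network location such as /dc1/rack1
+   * @return the depth of the location
+   */
{code}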


For choosing a node (randomly or not), do we really need the ancestorGen parameter?
The scope should already contain the branch-level info, shouldn't it?

{code:java}
+  /**
+   * Randomly choose a leaf node.
+   *
+   * @param scope range from which a node will be chosen, cannot start with ~
+   * @param excludedNodes nodes to be excluded
+   * @param excludedScope excluded node range. Cannot start with ~
+   * @param ancestorGen matters when excludeNodes is not null. It means the
+   * ancestor generation that's not allowed to share between chosen node and the
+   * excludedNodes. For example, if ancestorGen is 1, means chosen node
+   * cannot share the same parent with excludeNodes. If value is 2, cannot
+   * share the same grand parent, and so on. If ancestorGen is 0, then no
+   * effect.
+   *
+   * @return the chosen node
+   */
+  public Node chooseRandom(String scope, String excludedScope,
+  Collection excludedNodes, int ancestorGen) {

{code}
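To make the question concrete, a hypothetical call illustrating the documented semantics (the scope strings and variables are made up for this example):

{code:java}
// Hypothetical usage only: pick a leaf under /dc1, skip everything under
// /dc1/rack2, and -- because ancestorGen == 1 -- also skip any node that
// shares a parent with a node in excludedNodes.
Collection<Node> excludedNodes = Arrays.asList(alreadyChosenNode);
Node chosen = topology.chooseRandom("/dc1", "/dc1/rack2", excludedNodes, 1);
{code}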


Typo: availabel -> available.
{code:java}
+  int availabelCount = scopeNode instanceof InnerNode ?
+  ((InnerNode)scopeNode).getNumOfLeaves() - excludedCount :
+  1 - excludedCount;
+  Preconditions.checkState(availabelCount >= 0);
+  return availabelCount;

{code}




> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch
>
>
> Traditionally this has been implemented in Hadoop via a script or a 
> customizable Java class. One thing we want to add here is flexible 
> multi-level support instead of fixed levels like DC/Rack/NG/Node.






[jira] [Updated] (HDDS-976) Support YAML format network topology cluster definition

2019-01-18 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HDDS-976:
-
Attachment: NetworkTopologyDefault.yaml

> Support YAML format network topology cluster definition
> ---
>
> Key: HDDS-976
> URL: https://issues.apache.org/jira/browse/HDDS-976
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Junjie Chen
>Priority: Major
> Attachments: NetworkTopologyDefault.yaml
>
>







[jira] [Commented] (HDDS-320) Failed to start container with apache/hadoop-runner image.

2018-09-10 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16610033#comment-16610033
 ] 

Junjie Chen commented on HDDS-320:
--

I tried on another CentOS 7 machine. The issue still exists.

Can someone share your docker version and docker-compose version if you can 
successfully run docker-compose up -d? Mine are:

docker-compose version 1.22.0, build f46880fe
Docker version 18.09.0-ce-beta1, build 78a6bdb

> Failed to start container with apache/hadoop-runner image.
> --
>
> Key: HDDS-320
> URL: https://issues.apache.org/jira/browse/HDDS-320
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: centos 7.4
>Reporter: Junjie Chen
>Priority: Minor
>
> Following the doc in hadoop-ozone/doc/content/GettingStarted.md, the 
> docker-compose up -d step failed; the errors are listed below:
> [root@VM_16_5_centos ozone]# docker-compose logs
> Attaching to ozone_scm_1, ozone_datanode_1, ozone_ozoneManager_1
> datanode_1  | Traceback (most recent call last):
> datanode_1  |   File "/opt/envtoconf.py", line 104, in 
> datanode_1  | Simple(sys.argv[1:]).main()
> datanode_1  |   File "/opt/envtoconf.py", line 93, in main
> datanode_1  | self.process_envs()
> datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
> datanode_1  | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:
> datanode_1  | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw'
> datanode_1  | Traceback (most recent call last):
> datanode_1  |   File "/opt/envtoconf.py", line 104, in 
> datanode_1  | Simple(sys.argv[1:]).main()
> datanode_1  |   File "/opt/envtoconf.py", line 93, in main
> datanode_1  | self.process_envs()
> datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
> datanode_1  | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:
> ozoneManager_1  | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:
> ozoneManager_1  | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw'
> ozoneManager_1  | Traceback (most recent call last):
> ozoneManager_1  |   File "/opt/envtoconf.py", line 104, in 
> ozoneManager_1  | Simple(sys.argv[1:]).main()
> ozoneManager_1  |   File "/opt/envtoconf.py", line 93, in main
> ozoneManager_1  | self.process_envs()
> ozoneManager_1  |   File "/opt/envtoconf.py", line 67, in process_envs
>  
> ozoneManager_1  | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:  
> ozoneManager_1  | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw' 
> scm_1   | Traceback (most recent call last):
> scm_1   |   File "/opt/envtoconf.py", line 104, in
>  
> scm_1   | Simple(sys.argv[1:]).main()
> scm_1   |   File "/opt/envtoconf.py", line 93, in main
> scm_1   | self.process_envs()
> scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
>  
> scm_1   | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:  
> scm_1   | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw' 
> scm_1   | Traceback (most recent call last):
> scm_1   |   File "/opt/envtoconf.py", line 104, in
>  
> scm_1   | Simple(sys.argv[1:]).main()
> scm_1   |   File "/opt/envtoconf.py", line 93, in main
> scm_1   | self.process_envs()
> scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
>  
> scm_1   | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:  
> scm_1   | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw' 
> scm_1   | Traceback (most recent call last):
> 

[jira] [Updated] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-23 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HDDS-317:
-
Attachment: HDDS-317.4.patch

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.3.patch, HDDS-317.4.patch, 
> HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}
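For illustration, reading the renamed key with the StorageSize API might look like this (the default shown is illustrative only; the real key and default constants live in ScmConfigKeys):

{code:java}
import org.apache.hadoop.conf.StorageUnit;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

// Sketch only: read the renamed property as a size-with-unit value and
// convert it to bytes via Configuration#getStorageSize.
static long readContainerSizeBytes(OzoneConfiguration conf) {
  return (long) conf.getStorageSize(
      "ozone.scm.container.size",   // was ozone.scm.container.size.gb
      "5GB",                        // illustrative default only
      StorageUnit.BYTES);
}
{code}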






[jira] [Updated] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-23 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HDDS-317:
-
Attachment: HDDS-317.3.patch

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.3.patch, HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Commented] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-22 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589586#comment-16589586
 ] 

Junjie Chen commented on HDDS-317:
--

Thanks [~xyao] and [~anu], I will update this in the next patch.

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Commented] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-22 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16588584#comment-16588584
 ] 

Junjie Chen commented on HDDS-317:
--

Some tests, such as TestFreon, pass locally but fail on CI.

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Updated] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-22 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HDDS-317:
-
Attachment: HDDS-317.2.patch

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.2.patch, HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Updated] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-21 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HDDS-317:
-
Attachment: HDDS-317.patch

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Updated] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-21 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HDDS-317:
-
Attachment: HDDS-317.patch

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Updated] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-21 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HDDS-317:
-
Attachment: (was: HDDS-317.patch)

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Updated] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-21 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HDDS-317:
-
Attachment: (was: HDDS-317.patch)

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Updated] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-21 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen updated HDDS-317:
-
Attachment: HDDS-317.patch
Status: Patch Available  (was: Open)

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-317.patch
>
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}






[jira] [Commented] (HDDS-320) Failed to start container with apache/hadoop-runner image.

2018-08-21 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16587321#comment-16587321
 ] 

Junjie Chen commented on HDDS-320:
--

My OS version is CentOS Linux release 7.4.1708 (Core).

> Failed to start container with apache/hadoop-runner image.
> --
>
> Key: HDDS-320
> URL: https://issues.apache.org/jira/browse/HDDS-320
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: centos 7.4
>Reporter: Junjie Chen
>Priority: Minor
>
> Following the doc in hadoop-ozone/doc/content/GettingStarted.md, the 
> docker-compose up -d step failed; the errors are listed below:
> [root@VM_16_5_centos ozone]# docker-compose logs
> Attaching to ozone_scm_1, ozone_datanode_1, ozone_ozoneManager_1
> datanode_1  | Traceback (most recent call last):
> datanode_1  |   File "/opt/envtoconf.py", line 104, in 
> datanode_1  | Simple(sys.argv[1:]).main()
> datanode_1  |   File "/opt/envtoconf.py", line 93, in main
> datanode_1  | self.process_envs()
> datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
> datanode_1  | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:
> datanode_1  | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw'
> datanode_1  | Traceback (most recent call last):
> datanode_1  |   File "/opt/envtoconf.py", line 104, in 
> datanode_1  | Simple(sys.argv[1:]).main()
> datanode_1  |   File "/opt/envtoconf.py", line 93, in main
> datanode_1  | self.process_envs()
> datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
> datanode_1  | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:
> ozoneManager_1  | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:
> ozoneManager_1  | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw'
> ozoneManager_1  | Traceback (most recent call last):
> ozoneManager_1  |   File "/opt/envtoconf.py", line 104, in 
> ozoneManager_1  | Simple(sys.argv[1:]).main()
> ozoneManager_1  |   File "/opt/envtoconf.py", line 93, in main
> ozoneManager_1  | self.process_envs()
> ozoneManager_1  |   File "/opt/envtoconf.py", line 67, in process_envs
>  
> ozoneManager_1  | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:  
> ozoneManager_1  | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw' 
> scm_1   | Traceback (most recent call last):
> scm_1   |   File "/opt/envtoconf.py", line 104, in
>  
> scm_1   | Simple(sys.argv[1:]).main()
> scm_1   |   File "/opt/envtoconf.py", line 93, in main
> scm_1   | self.process_envs()
> scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
>  
> scm_1   | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:  
> scm_1   | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw' 
> scm_1   | Traceback (most recent call last):
> scm_1   |   File "/opt/envtoconf.py", line 104, in
>  
> scm_1   | Simple(sys.argv[1:]).main()
> scm_1   |   File "/opt/envtoconf.py", line 93, in main
> scm_1   | self.process_envs()
> scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
>  
> scm_1   | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as myfile:  
> scm_1   | IOError: [Errno 13] Permission denied: 
> '/opt/hadoop/etc/hadoop/log4j.properties.raw' 
> scm_1   | Traceback (most recent call last):
> scm_1   |   File "/opt/envtoconf.py", line 104, in
>  
> scm_1   | Simple(sys.argv[1:]).main()
> scm_1   |   File "/opt/envtoconf.py", line 93, in main
> scm_1   | self.process_envs()
> scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
>  
> scm_1   | with open(self.destination_file_path(name, extension) + 
> ".raw", "w") as 

[jira] [Created] (HDDS-320) Failed to start container with apache/hadoop-runner image.

2018-08-02 Thread Junjie Chen (JIRA)
Junjie Chen created HDDS-320:


 Summary: Failed to start container with apache/hadoop-runner image.
 Key: HDDS-320
 URL: https://issues.apache.org/jira/browse/HDDS-320
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: document
 Environment: centos 7.4
Reporter: Junjie Chen


Following the doc in hadoop-ozone/doc/content/GettingStarted.md, the 
docker-compose up -d step failed; the errors are listed below:
[root@VM_16_5_centos ozone]# docker-compose logs
Attaching to ozone_scm_1, ozone_datanode_1, ozone_ozoneManager_1
datanode_1  | Traceback (most recent call last):
datanode_1  |   File "/opt/envtoconf.py", line 104, in 
datanode_1  | Simple(sys.argv[1:]).main()
datanode_1  |   File "/opt/envtoconf.py", line 93, in main
datanode_1  | self.process_envs()
datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
datanode_1  | with open(self.destination_file_path(name, extension) + 
".raw", "w") as myfile:
datanode_1  | IOError: [Errno 13] Permission denied: 
'/opt/hadoop/etc/hadoop/log4j.properties.raw'
datanode_1  | Traceback (most recent call last):
datanode_1  |   File "/opt/envtoconf.py", line 104, in 
datanode_1  | Simple(sys.argv[1:]).main()
datanode_1  |   File "/opt/envtoconf.py", line 93, in main
datanode_1  | self.process_envs()
datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
datanode_1  | with open(self.destination_file_path(name, extension) + 
".raw", "w") as myfile:

ozoneManager_1  | with open(self.destination_file_path(name, extension) + 
".raw", "w") as myfile:
ozoneManager_1  | IOError: [Errno 13] Permission denied: 
'/opt/hadoop/etc/hadoop/log4j.properties.raw'
ozoneManager_1  | Traceback (most recent call last):
ozoneManager_1  |   File "/opt/envtoconf.py", line 104, in 
ozoneManager_1  | Simple(sys.argv[1:]).main()
ozoneManager_1  |   File "/opt/envtoconf.py", line 93, in main
ozoneManager_1  | self.process_envs()
ozoneManager_1  |   File "/opt/envtoconf.py", line 67, in process_envs  
   
ozoneManager_1  | with open(self.destination_file_path(name, extension) + 
".raw", "w") as myfile:  
ozoneManager_1  | IOError: [Errno 13] Permission denied: 
'/opt/hadoop/etc/hadoop/log4j.properties.raw' 
scm_1   | Traceback (most recent call last):
scm_1   |   File "/opt/envtoconf.py", line 104, in  
   
scm_1   | Simple(sys.argv[1:]).main()
scm_1   |   File "/opt/envtoconf.py", line 93, in main
scm_1   | self.process_envs()
scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs  
   
scm_1   | with open(self.destination_file_path(name, extension) + 
".raw", "w") as myfile:  
scm_1   | IOError: [Errno 13] Permission denied: 
'/opt/hadoop/etc/hadoop/log4j.properties.raw' 
scm_1   | Traceback (most recent call last):
scm_1   |   File "/opt/envtoconf.py", line 104, in  
   
scm_1   | Simple(sys.argv[1:]).main()
scm_1   |   File "/opt/envtoconf.py", line 93, in main
scm_1   | self.process_envs()
scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs  
   
scm_1   | with open(self.destination_file_path(name, extension) + 
".raw", "w") as myfile:  
scm_1   | IOError: [Errno 13] Permission denied: 
'/opt/hadoop/etc/hadoop/log4j.properties.raw' 
scm_1   | Traceback (most recent call last):
scm_1   |   File "/opt/envtoconf.py", line 104, in  
   
scm_1   | Simple(sys.argv[1:]).main()
scm_1   |   File "/opt/envtoconf.py", line 93, in main
scm_1   | self.process_envs()
scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs  
   
scm_1   | with open(self.destination_file_path(name, extension) + 
".raw", "w") as myfile:  
scm_1   | IOError: [Errno 13] Permission denied: 
'/opt/hadoop/etc/hadoop/log4j.properties.raw'   

My docker-compose version is:
docker-compose version 1.22.0, build f46880fe

Output of docker images:
apache/hadoop-runner   latest   569314fd9a73   5 weeks ago   646MB

From the Dockerfile, we can see the "chown hadoop /opt" command. It looks like 
we need a "-R" here?
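Something along these lines in the hadoop-runner Dockerfile (a hypothetical change, not verified):

{code}
# make the whole tree writable by the hadoop user, not just /opt itself
RUN chown -R hadoop /opt
{code}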






[jira] [Commented] (HDDS-286) Fix NodeReportPublisher.getReport NPE

2018-08-02 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566588#comment-16566588
 ] 

Junjie Chen commented on HDDS-286:
--

Hi Xiaoyu,

I can't reproduce this on the latest trunk with the command "mvn test 
-Dtest=TestKeys -Phdds". Please see the logs below:

[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.ozone.web.client.TestKeys
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.371 
s - in org.apache.hadoop.ozone.web.client.TestKeys
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0

Could you please elaborate?


> Fix NodeReportPublisher.getReport NPE
> -
>
> Key: HDDS-286
> URL: https://issues.apache.org/jira/browse/HDDS-286
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> This can be reproed with TestKeys#testPutKey
> {code}
> 2018-07-23 21:33:55,598 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 0: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:350)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:260)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}


