Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-24 Thread John Fulton

On 01/23/2017 05:07 AM, Saravanan KR wrote:

Thanks John for the info.

I am going through the spec in detail. And before that, I had a few
thoughts about how I wanted to approach this, which I have drafted in
https://etherpad.openstack.org/p/tripleo-derive-params. It is not
100% ready yet; I am still working on it.


Awesome. Thank you Saravanan for taking the time to review this. I
made some updates in the etherpad above.


As of now, there are a few differences on top of my mind, which I want
to highlight; I am still going through the spec in detail:

* Profiles vs Features - Considering an overcloud node as a profile
rather than as a node which can host these features would have
limitations. For example, if I need a Compute node to host both
Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
have to create a profile like
hci_enterprise_many_small_vms_with_dpdk? The first is not
appropriate and the latter is not scalable; maybe you have something
else in mind?


Why is the latter not scalable? It's analogous to composable roles.

With Composable Roles, if I want HCI, which is made from the Compute
and CephStorage roles, then I add a name for the new role and list
the services it should have by borrowing from the examples shipped
in openstack-tripleo-heat-templates/roles_data.yaml. So I could call
my new role OsdCompute and make:


https://github.com/RHsyseng/hci/blob/master/custom-templates/custom-roles.yaml#L168-L193
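
For illustration, an abbreviated sketch of such a role entry (the keys
follow the roles_data.yaml conventions; the full service list would be
borrowed from the Compute and CephStorage entries, only a few services
are shown here):

  - name: OsdCompute
    CountDefault: 0
    HostnameFormatDefault: '%stackname%-osd-compute-%index%'
    ServicesDefault:
      - OS::TripleO::Services::NovaCompute
      - OS::TripleO::Services::ComputeNeutronOvsAgent
      - OS::TripleO::Services::CephOSD
      # ... plus the common services (ntp, kernel, etc.) shared by all roles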

Similarly, if I want to make a new profile, then I give it a name and
then combine what I want. E.g. if the workload profiles file had
hci_throughput like this:

  hci_throughput:
    workload::average_guest_flavor: 'm1.large'
    workload::average_guest_memory_size_in_mb: 8192
    workload::average_guest_CPU_utilization_percentage: 90
    workload::tuned_profile: 'throughput-performance'

The deployer could easily compose their own hci_latency profile as:

  hci_latency:
    workload::average_guest_flavor: 'm1.large'
    workload::average_guest_memory_size_in_mb: 8192
    workload::average_guest_CPU_utilization_percentage: 90
    workload::tuned_profile: 'latency-performance'

The above is a simple example, but if more parameters were modified,
the ability to add multiple tunables per profile would be useful: the
deployer would just specify that one name and get all the params that
come with it. So, I was not suggesting we ship every possible profile,
but that we provide a few proven examples of known use-cases and also
have one example with all of them for CI tests (similar to CI for
composable roles).
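
For example, selecting such a profile might be a single line in an
environment file (the parameter name here is only illustrative, not
something the spec defines yet):

  parameter_defaults:
    WorkloadProfile: 'hci_latency'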

I think the above fits in with the notion of tags that you mentioned in
that they can be combined, e.g. "dpdk,osd" vs "sriov,osd". The
difference is that the deployer could give any combination of them a
name. As the number of inputs for derived parameters grows, so does
the benefit of a name to refer to a set of them.

Perhaps the templates should not be for "Workload Profiles" but for
"Derived THT" and those templates should call different functions.
Then some of those functions would include derivations to optimize for
different workloads while other functions would make derivations for
DPDK or SRIOV deploys. Something like:

  hci_dpdk:
    derive::workload::average_guest_flavor: 'm1.large'
    derive::workload::average_guest_memory_size_in_mb: 8192
    derive::workload::average_guest_CPU_utilization_percentage: 90
    derive::tag::network: 'dpdk'

In something like the above, you could implement and support the tags
you described, and a user would not need to use performance profiles.
They could just include a new THT env file, e.g. derived_params.yaml,
and indicate which of the many derivable parameters they want to use.
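
A minimal sketch of what such a derived_params.yaml might contain,
reusing the derive:: namespace from the example above (the exact
schema and the DerivedParameters name are assumptions, to be settled
in the spec):

  parameter_defaults:
    DerivedParameters:
      derive::workload::average_guest_memory_size_in_mb: 8192
      derive::workload::average_guest_CPU_utilization_percentage: 90
      derive::tag::network: 'dpdk'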

What do you think of exposing the tags to the user as in the above?


* Independent - The initial plan was for this to be an independent
execution, which can also be added to the deploy if needed.


I agree.


* Not to expose/duplicate parameters which are straightforward; for
example, the tuned-profile name should be associated with the feature
internally, and the workflow will decide it.


By "straight forward" do you mean non-derived?

I'd prefer to allow an advanced deployer to compose a performance
profile with whatever performance tweaks they need. So, I'd put
tuned in a workload profile because if I want to tune my overcloud for
that workload then I expect it to have the appropriate tuned profile.

I see a few ways this could go. Given tags or profiles, I think we
both want a way to refer to a set of parameters with a simple name
and we want either composability within the name or the ability to
combine more than one name in a deployment. However, I see two options:

A. Do we want to only derive parameters that we think must be derived
and require users to manually set non-derived ones outside of this
spec?

B. Do we want to allow for any parameter to be derived if we unify
those parameters under a name and offer workflow that 

Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-24 Thread John Fulton

On 01/24/2017 12:45 AM, Saravanan KR wrote:

Thanks Giulio for adding it to the PTG discussion pad. I am not yet
sure of my presence at the PTG. Hoping that things will fall into
place soon.

We have spent a considerable amount of time in moving from static
roles to composable roles. If we are planning to introduce static
profiles, then after a while we will end up with the same problem,
though it ultimately depends on how the features will be composed on
a role. Looking forward.


Hi Saravanan,

I wasn't planning to introduce static profiles. What's proposed
in the spec [1] is for the profiles to be easily composed, so I
mimicked the composable roles pattern. I will reply to your message
from the 23rd with more details and an example.

  John

[1] https://review.openstack.org/#/c/423304/


On Mon, Jan 23, 2017 at 6:25 PM, Giulio Fidente  wrote:

On 01/23/2017 11:07 AM, Saravanan KR wrote:

Thanks John for the info.

I am going through the spec in detail. And before that, I had a few
thoughts about how I wanted to approach this, which I have drafted in
https://etherpad.openstack.org/p/tripleo-derive-params. It is not
100% ready yet; I am still working on it.


I've linked this etherpad for the session we'll have at the PTG


As of now, there are a few differences on top of my mind, which I want
to highlight; I am still going through the spec in detail:
* Profiles vs Features - Considering an overcloud node as a profile
rather than as a node which can host these features would have
limitations. For example, if I need a Compute node to host both
Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
have to create a profile like
hci_enterprise_many_small_vms_with_dpdk? The first is not
appropriate and the latter is not scalable; maybe you have something
else in mind?
* Independent - The initial plan was for this to be an independent
execution, which can also be added to the deploy if needed.
* Not to expose/duplicate parameters which are straightforward; for
example, the tuned-profile name should be associated with the feature
internally, and the workflow will decide it.


for all of the above, I think we need to decide if we want the
optimizations to be profile-based and gathered *before* the overcloud
deployment is started or if we want to set these values during the
overcloud deployment based on the data we have at runtime

seems like both approaches have pros and cons and this would be a good
conversation to have with more people at the PTG


* And another thing which I couldn't get: where will the workflow
actions be defined, in THT or tripleo_common?


to me it sounds like executing the workflows before stack creation is
started would be fine, at least for the initial phase

running workflows from Heat depends on the other blueprint/session we'll
have about the WorkflowExecution resource; once that is available, we
could trigger the workflow execution from THT if beneficial


The requirements which I thought of for the deriving workflow are:
the parameter-deriving workflow should
* be independent to run
* take basic parameter inputs; for easy deployment, keep a very minimal
set of mandatory parameters, and the rest as optional parameters
* read introspection data from the Ironic DB and the Swift-stored blob

I will add these comments as a starting point on the spec. We will
work towards bringing down the differences, so that the operators'
headache is reduced to a great extent.


thanks

--
Giulio Fidente
GPG KEY: 08D733BA







Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Saravanan KR
Thanks Giulio for adding it to the PTG discussion pad. I am not yet
sure of my presence at the PTG. Hoping that things will fall into
place soon.

We have spent a considerable amount of time in moving from static
roles to composable roles. If we are planning to introduce static
profiles, then after a while we will end up with the same problem,
though it ultimately depends on how the features will be composed on
a role. Looking forward.

Regards,
Saravanan KR

On Mon, Jan 23, 2017 at 6:25 PM, Giulio Fidente  wrote:
> On 01/23/2017 11:07 AM, Saravanan KR wrote:
>> Thanks John for the info.
>>
>> I am going through the spec in detail. And before that, I had a few
>> thoughts about how I wanted to approach this, which I have drafted in
>> https://etherpad.openstack.org/p/tripleo-derive-params. It is not
>> 100% ready yet; I am still working on it.
>
> I've linked this etherpad for the session we'll have at the PTG
>
>> As of now, there are a few differences on top of my mind, which I want
>> to highlight; I am still going through the spec in detail:
>> * Profiles vs Features - Considering an overcloud node as a profile
>> rather than as a node which can host these features would have
>> limitations. For example, if I need a Compute node to host both
>> Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
>> have to create a profile like
>> hci_enterprise_many_small_vms_with_dpdk? The first is not
>> appropriate and the latter is not scalable; maybe you have something
>> else in mind?
>> * Independent - The initial plan was for this to be an independent
>> execution, which can also be added to the deploy if needed.
>> * Not to expose/duplicate parameters which are straightforward; for
>> example, the tuned-profile name should be associated with the feature
>> internally, and the workflow will decide it.
>
> for all of the above, I think we need to decide if we want the
> optimizations to be profile-based and gathered *before* the overcloud
> deployment is started or if we want to set these values during the
> overcloud deployment based on the data we have at runtime
>
> seems like both approaches have pros and cons and this would be a good
> conversation to have with more people at the PTG
>
>> * And another thing which I couldn't get: where will the workflow
>> actions be defined, in THT or tripleo_common?
>
> to me it sounds like executing the workflows before stack creation is
> started would be fine, at least for the initial phase
>
> running workflows from Heat depends on the other blueprint/session we'll
> have about the WorkflowExecution resource; once that is available, we
> could trigger the workflow execution from THT if beneficial
>
>> The requirements which I thought of for the deriving workflow are:
>> the parameter-deriving workflow should
>> * be independent to run
>> * take basic parameter inputs; for easy deployment, keep a very minimal
>> set of mandatory parameters, and the rest as optional parameters
>> * read introspection data from the Ironic DB and the Swift-stored blob
>>
>> I will add these comments as a starting point on the spec. We will
>> work towards bringing down the differences, so that the operators'
>> headache is reduced to a great extent.
>
> thanks
>
> --
> Giulio Fidente
> GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Giulio Fidente
On 01/23/2017 11:07 AM, Saravanan KR wrote:
> Thanks John for the info.
> 
> I am going through the spec in detail. And before that, I had a few
> thoughts about how I wanted to approach this, which I have drafted in
> https://etherpad.openstack.org/p/tripleo-derive-params. It is not
> 100% ready yet; I am still working on it.

I've linked this etherpad for the session we'll have at the PTG

> As of now, there are a few differences on top of my mind, which I want
> to highlight; I am still going through the spec in detail:
> * Profiles vs Features - Considering an overcloud node as a profile
> rather than as a node which can host these features would have
> limitations. For example, if I need a Compute node to host both
> Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
> have to create a profile like
> hci_enterprise_many_small_vms_with_dpdk? The first is not
> appropriate and the latter is not scalable; maybe you have something
> else in mind?
> * Independent - The initial plan was for this to be an independent
> execution, which can also be added to the deploy if needed.
> * Not to expose/duplicate parameters which are straightforward; for
> example, the tuned-profile name should be associated with the feature
> internally, and the workflow will decide it.

for all of the above, I think we need to decide if we want the
optimizations to be profile-based and gathered *before* the overcloud
deployment is started or if we want to set these values during the
overcloud deployment based on the data we have at runtime

seems like both approaches have pros and cons and this would be a good
conversation to have with more people at the PTG

> * And another thing which I couldn't get: where will the workflow
> actions be defined, in THT or tripleo_common?

to me it sounds like executing the workflows before stack creation is
started would be fine, at least for the initial phase

running workflows from Heat depends on the other blueprint/session we'll
have about the WorkflowExecution resource; once that is available, we
could trigger the workflow execution from THT if beneficial

> The requirements which I thought of for the deriving workflow are:
> the parameter-deriving workflow should
> * be independent to run
> * take basic parameter inputs; for easy deployment, keep a very minimal
> set of mandatory parameters, and the rest as optional parameters
> * read introspection data from the Ironic DB and the Swift-stored blob
> 
> I will add these comments as a starting point on the spec. We will
> work towards bringing down the differences, so that the operators'
> headache is reduced to a great extent.

thanks

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Saravanan KR
Thanks John for the info.

I am going through the spec in detail. And before that, I had a few
thoughts about how I wanted to approach this, which I have drafted in
https://etherpad.openstack.org/p/tripleo-derive-params. It is not
100% ready yet; I am still working on it.

As of now, there are a few differences on top of my mind, which I want
to highlight; I am still going through the spec in detail:
* Profiles vs Features - Considering an overcloud node as a profile
rather than as a node which can host these features would have
limitations. For example, if I need a Compute node to host both
Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
have to create a profile like
hci_enterprise_many_small_vms_with_dpdk? The first is not
appropriate and the latter is not scalable; maybe you have something
else in mind?
* Independent - The initial plan was for this to be an independent
execution, which can also be added to the deploy if needed.
* Not to expose/duplicate parameters which are straightforward; for
example, the tuned-profile name should be associated with the feature
internally, and the workflow will decide it.
* And another thing which I couldn't get: where will the workflow
actions be defined, in THT or tripleo_common?


The requirements which I thought of for the deriving workflow are:
the parameter-deriving workflow should
* be independent to run
* take basic parameter inputs; for easy deployment, keep a very minimal
set of mandatory parameters, and the rest as optional parameters
* read introspection data from the Ironic DB and the Swift-stored blob
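
To make the input and introspection points concrete, a workbook
wireframe along these lines could satisfy them (the workbook name, the
input names, and the Swift object layout are assumptions here, shown
only for illustration; ironic-inspector is assumed to store its data
blob in Swift):

  version: '2.0'
  name: tripleo.derive_params   # placeholder name
  workflows:
    derive_parameters:
      input:
        - node_uuid           # mandatory: node whose introspection data to read
        - dpdk_nics           # mandatory: interfaces to bind to the DPDK driver
        - plan: overcloud     # optional, has a default
        - num_pmd_cpus: null  # optional; derive from NUMA data if omitted
      tasks:
        get_introspection_data:
          # assumes the inspector data blob lives in Swift as
          # inspector_data-<node uuid>
          action: swift.get_object
          input:
            container: ironic-inspector
            obj: inspector_data-<% $.node_uuid %>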

I will add these comments as a starting point on the spec. We will
work towards bringing down the differences, so that the operators'
headache is reduced to a great extent.

Regards,
Saravanan KR

On Fri, Jan 20, 2017 at 9:56 PM, John Fulton  wrote:
> On 01/11/2017 11:34 PM, Saravanan KR wrote:
>>
>> Thanks John, I would really appreciate it if you could tag me on the
>> reviews. I will do the same for mine too.
>
>
> Hi Saravanan,
>
> Following up on this, have a look at the OS::Mistral::WorkflowExecution
> Heat spec [1] to trigger Mistral workflows. I'm hoping to use it for
> deriving THT parameters for optimal resource isolation in HCI
> deployments as I mentioned below. I have a spec [2] which describes
> the derivation of the values, but this is provided as an example for
> the more general problem of capturing the rules used to derive the
> values so that deployers may easily apply them.
>
> Thanks,
>   John
>
> [1] OS::Mistral::WorkflowExecution https://review.openstack.org/#/c/267770/
> [2] TripleO Performance Profiles https://review.openstack.org/#/c/423304/
>
>> On Wed, Jan 11, 2017 at 8:03 PM, John Fulton  wrote:
>>>
>>> On 01/11/2017 12:56 AM, Saravanan KR wrote:


 Thanks Emilien and Giulio for your valuable feedback. I will start
 working towards finalizing the workbook and the actions required.
>>>
>>>
>>>
>>> Saravanan,
>>>
>>> If you can add me to the review for your workbook, I'd appreciate it.
>>> I'm trying to solve a similar problem, of computing THT params for HCI
>>> deployments in order to isolate resources between CephOSDs and
>>> NovaComputes, and I was also looking to use a Mistral workflow. I'll
>>> add you to the review of any related work, if you don't mind. Your
>>> proposal to get NUMA info into Ironic [1] helps me there too. Hope to
>>> see you at the PTG.
>>>
>>> Thanks,
>>>   John
>>>
>>> [1] https://review.openstack.org/396147
>>>
>>>
> would you be able to join the PTG to help us with the session on the
> overcloud settings optimization?


 I will come back on this, as I have not planned for it yet. If it
 works out, I will update the etherpad.

 Regards,
 Saravanan KR


 On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente wrote:
>
>
> On 01/04/2017 09:13 AM, Saravanan KR wrote:
>>
>>
>>
>> Hello,
>>
>> The aim of this mail is to ease the DPDK deployment with TripleO. I
>> would like to see if the approach of deriving THT parameters based on
>> introspection data, with a high-level input, would be feasible.
>>
>> Let me brief you on the complexity of certain parameters, which are
>> related to DPDK. The following parameters should be configured for a
>> well-performing DPDK cluster:
>> * NeutronDpdkCoreList (puppet-vswitch)
>> * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
>> review)
>> * NovaVcpuPinset (puppet-nova)
>>
>> * NeutronDpdkSocketMemory (puppet-vswitch)
>> * NeutronDpdkMemoryChannels (puppet-vswitch)
>> * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
>> * Interface to bind DPDK driver (network config templates)
>>
>> The complexity of deciding some of these parameters is explained in
>> the blog [1], where the CPUs have to be chosen in accordance with the

Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-20 Thread John Fulton

On 01/11/2017 11:34 PM, Saravanan KR wrote:

Thanks John, I would really appreciate it if you could tag me on the
reviews. I will do the same for mine too.


Hi Saravanan,

Following up on this, have a look at the OS::Mistral::WorkflowExecution
Heat spec [1] to trigger Mistral workflows. I'm hoping to use it for
deriving THT parameters for optimal resource isolation in HCI
deployments as I mentioned below. I have a spec [2] which describes
the derivation of the values, but this is provided as an example for
the more general problem of capturing the rules used to derive the
values so that deployers may easily apply them.
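
If that spec lands, triggering the derivation from a template might
look something like this (a sketch only: the resource properties are
speculative since the interface is still under review, and the
workflow name is hypothetical):

  resources:
    derive_params:
      type: OS::Mistral::WorkflowExecution
      properties:
        workflow: tripleo.derive_params.derive_parameters
        input:
          plan: overcloud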

Thanks,
  John

[1] OS::Mistral::WorkflowExecution https://review.openstack.org/#/c/267770/
[2] TripleO Performance Profiles https://review.openstack.org/#/c/423304/


On Wed, Jan 11, 2017 at 8:03 PM, John Fulton  wrote:

On 01/11/2017 12:56 AM, Saravanan KR wrote:


Thanks Emilien and Giulio for your valuable feedback. I will start
working towards finalizing the workbook and the actions required.



Saravanan,

If you can add me to the review for your workbook, I'd appreciate it. I'm
trying to solve a similar problem, of computing THT params for HCI
deployments in order to isolate resources between CephOSDs and NovaComputes,
and I was also looking to use a Mistral workflow. I'll add you to the review
of any related work, if you don't mind. Your proposal to get NUMA info into
Ironic [1] helps me there too. Hope to see you at the PTG.

Thanks,
  John

[1] https://review.openstack.org/396147



would you be able to join the PTG to help us with the session on the
overcloud settings optimization?


I will come back on this, as I have not planned for it yet. If it
works out, I will update the etherpad.

Regards,
Saravanan KR


On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente wrote:


On 01/04/2017 09:13 AM, Saravanan KR wrote:



Hello,

The aim of this mail is to ease the DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameters based on
introspection data, with a high-level input, would be feasible.

Let me brief you on the complexity of certain parameters, which are
related to DPDK. The following parameters should be configured for a
well-performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs have to be chosen in accordance with the
NUMA node associated with the interface. We are working on a spec [2]
to collect the required details from the baremetal via introspection.
The proposal is to create a Mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
value of parameters based on the introspection data. I have created a
simple workbook [3] with what I have in mind (not final, only a
wireframe). The expected output of this workflow is to return the list
of inputs for "parameter_defaults", which will be used for the
deployment. I would like to hear from the experts whether there are
any drawbacks with this approach, or any better approach.




hi, I am not an expert, I think John (on CC) knows more but this looks
like a good initial step to me.

once we have the workbook in good shape, we could probably integrate it
in the tripleo client/common to (optionally) trigger it before every
deployment

would you be able to join the PTG to help us with the session on the
overcloud settings optimization?

https://etherpad.openstack.org/p/tripleo-ptg-pike
--
Giulio Fidente
GPG KEY: 08D733BA








Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-12 Thread Jiri Tomasek



On 4.1.2017 09:13, Saravanan KR wrote:

Hello,

The aim of this mail is to ease the DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameters based on
introspection data, with a high-level input, would be feasible.

Let me brief you on the complexity of certain parameters, which are
related to DPDK. The following parameters should be configured for a
well-performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs have to be chosen in accordance with the
NUMA node associated with the interface. We are working on a spec [2]
to collect the required details from the baremetal via introspection.
The proposal is to create a Mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
value of parameters based on the introspection data. I have created a
simple workbook [3] with what I have in mind (not final, only a
wireframe). The expected output of this workflow is to return the list
of inputs for "parameter_defaults", which will be used for the
deployment. I would like to hear from the experts whether there are
any drawbacks with this approach, or any better approach.

This workflow will ease the TripleO UI's need to integrate DPDK, as
the UI (user) has to choose only the interface for DPDK [and
optionally, the number of CPUs required for PMD and host]. Of course,
the introspection should be completed first, with which it will be
easy to deploy a DPDK cluster.

There is added complexity if the cluster contains heterogeneous nodes,
for example a cluster having HP and Dell machines with different CPU
layouts; we would need to enhance the workflow to take actions based
on roles/nodes, which brings in a requirement of localizing the
above-mentioned variables per role. For now, consider this proposal
for a homogeneous cluster; if there is value in this, I will work
towards heterogeneous clusters too.

Please share your thoughts.

Regards,
Saravanan KR


[1] https://krsacme.github.io/blog/post/dpdk-pmd-cpu-list/
[2] https://review.openstack.org/#/c/396147/
[3] https://gist.github.com/krsacme/c5be089d6fa216232d49c85082478419
[4] 
https://review.openstack.org/#/c/411797/6/extraconfig/pre_network/host_config_and_reboot.role.j2.yaml



We have recently been getting quite a lot of requests such as this -
for bringing up logic which takes the introspection data and
pre-populates the parameters with it. This is usable for network
configuration, storage, etc. As there seems to be a real need for such
features, the TripleO team should discuss a general approach for how
this logic should work. A Mistral workflow is an obvious choice; we
just need to make sure certain pre-requisites are met.


From the GUI point of view, we probably don't want this type of
workflow to happen as part of starting the deployment. That's too
late. We need to find a mechanism which helps us identify when such a
workflow can run, and it should probably be confirmed by the user. And
when it finishes, the user needs to be able to review those
parameters, confirm that this is the configuration he wants to deploy,
and be able to make changes to it.


Obviously, as this workflow uses introspection data, the user could be
offered to run it when introspection finishes. The problem is that we
need to verify that using this workflow is valid for the deployment
setup the user is creating. For example, if this workflow sets
parameters which are defined in templates which the user won't deploy,
it is wrong.


So I think the proper way would be to embed this in environment
selection. Environment selection is a step where the user makes
high-level deployment decisions - selects the environments which are
going to be used for the deployment. We could bring in a mechanism
(embedded in the environment file, or capabilities-map.yaml maybe?)
which would allow the GUI to say: 'hey, you've just enabled feature
Foo, and you have introspection data available. Do you wish to
pre-configure this feature using this data?' On confirmation the
workflow is triggered and the configuration is populated. The user
reviews it and makes tweaks if he wants.
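
A rough sketch of how such a hint might look in capabilities-map.yaml
(the derive_params_workflow key is invented here purely to make the
idea concrete; only file/title/description exist today):

  environments:
    - file: environments/neutron-ovs-dpdk.yaml
      title: DPDK with OVS
      description: Deploy DPDK-enabled Open vSwitch
      # hypothetical hook the GUI could use to offer pre-configuration
      derive_params_workflow: tripleo.derive_params.dpdk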


I'd love to hear feedback on this.

--Jirka




Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-11 Thread Saravanan KR
Thanks John, I would really appreciate it if you could tag me on the
reviews. I will do the same for mine too.

Regards,
Saravanan KR

On Wed, Jan 11, 2017 at 8:03 PM, John Fulton  wrote:
> On 01/11/2017 12:56 AM, Saravanan KR wrote:
>>
>> Thanks Emilien and Giulio for your valuable feedback. I will start
>> working towards finalizing the workbook and the actions required.
>
>
> Saravanan,
>
> If you can add me to the review for your workbook, I'd appreciate it. I'm
> trying to solve a similar problem, of computing THT params for HCI
> deployments in order to isolate resources between CephOSDs and NovaComputes,
> and I was also looking to use a Mistral workflow. I'll add you to the review
> of any related work, if you don't mind. Your proposal to get NUMA info into
> Ironic [1] helps me there too. Hope to see you at the PTG.
>
> Thanks,
>   John
>
> [1] https://review.openstack.org/396147
>
>
>>> would you be able to join the PTG to help us with the session on the
>>> overcloud settings optimization?
>>
>> I will come back on this, as I have not planned for it yet. If it
>> works out, I will update the etherpad.
>>
>> Regards,
>> Saravanan KR
>>
>>
>> On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente wrote:
>>>
>>> On 01/04/2017 09:13 AM, Saravanan KR wrote:


 Hello,

 The aim of this mail is to ease the DPDK deployment with TripleO. I
 would like to see if the approach of deriving THT parameters based on
 introspection data, with a high-level input, would be feasible.

 Let me brief you on the complexity of certain parameters, which are
 related to DPDK. The following parameters should be configured for a
 well-performing DPDK cluster:
 * NeutronDpdkCoreList (puppet-vswitch)
 * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
 review)
 * NovaVcpuPinset (puppet-nova)

 * NeutronDpdkSocketMemory (puppet-vswitch)
 * NeutronDpdkMemoryChannels (puppet-vswitch)
 * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
 * Interface to bind DPDK driver (network config templates)

 The complexity of deciding some of these parameters is explained in
 the blog [1], where the CPUs have to be chosen in accordance with the
 NUMA node associated with the interface. We are working on a spec [2]
 to collect the required details from the baremetal via introspection.
 The proposal is to create a Mistral workbook and actions
 (tripleo-common), which will take minimal inputs and decide the actual
 value of parameters based on the introspection data. I have created a
 simple workbook [3] with what I have in mind (not final, only a
 wireframe). The expected output of this workflow is to return the list
 of inputs for "parameter_defaults", which will be used for the
 deployment. I would like to hear from the experts whether there are
 any drawbacks with this approach, or any better approach.
>>>
>>>
>>>
>>> hi, I am not an expert, I think John (on CC) knows more but this
>>> looks like a good initial step to me.
>>>
>>> once we have the workbook in good shape, we could probably integrate
>>> it in the tripleo client/common to (optionally) trigger it before
>>> every deployment
>>>
>>> would you be able to join the PTG to help us with the session on the
>>> overcloud settings optimization?
>>>
>>> https://etherpad.openstack.org/p/tripleo-ptg-pike
>>> --
>>> Giulio Fidente
>>> GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-11 Thread John Fulton

On 01/11/2017 12:56 AM, Saravanan KR wrote:

Thanks Emilien and Giulio for your valuable feedback. I will start
working towards finalizing the workbook and the actions required.


Saravanan,

If you can add me to the review for your workbook, I'd appreciate it. 
I'm trying to solve a similar problem, of computing THT params for HCI 
deployments in order to isolate resources between CephOSDs and 
NovaComputes, and I was also looking to use a Mistral workflow. I'll add 
you to the review of any related work, if you don't mind. Your proposal 
to get NUMA info into Ironic [1] helps me there too. Hope to see you at 
the PTG.
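
As a concrete (hypothetical) illustration of the kind of derivation I
mean, the workflow might compute Nova's reserved memory and CPU
pinning from the OSD count, along these lines:

  parameter_defaults:
    # e.g. 3 OSDs x ~3 GB each, plus ~0.5 GB overhead per guest for 20
    # guests: 3*3072 + 20*512 = 19456 (all numbers illustrative only)
    NovaReservedHostMemory: 19456
    # leave cores 0-3 for the OSDs, pin guests to the rest
    NovaVcpuPinSet: '4-23'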


Thanks,
  John

[1] https://review.openstack.org/396147


would you be able to join the PTG to help us with the session on the
overcloud settings optimization?

I will come back on this, as I have not planned for it yet. If it
works out, I will update the etherpad.

Regards,
Saravanan KR


On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente  wrote:

On 01/04/2017 09:13 AM, Saravanan KR wrote:


Hello,

The aim of this mail is to ease the DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameters based on
introspection data, with a high-level input, would be feasible.

Let me brief you on the complexity of certain parameters, which are
related to DPDK. The following parameters should be configured for a
well-performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs have to be chosen in accordance with the
NUMA node associated with the interface. We are working on a spec [2]
to collect the required details from the baremetal via introspection.
The proposal is to create a Mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
value of parameters based on the introspection data. I have created a
simple workbook [3] with what I have in mind (not final, only a
wireframe). The expected output of this workflow is to return the list
of inputs for "parameter_defaults", which will be used for the
deployment. I would like to hear from the experts whether there are
any drawbacks with this approach, or any better approach.



hi, I am not an expert, I think John (on CC) knows more but this looks like
a good initial step to me.

once we have the workbook in good shape, we could probably integrate it in
the tripleo client/common to (optionally) trigger it before every deployment

would you be able to join the PTG to help us with the session on the
overcloud settings optimization?

https://etherpad.openstack.org/p/tripleo-ptg-pike
--
Giulio Fidente
GPG KEY: 08D733BA




Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-10 Thread Saravanan KR
Thanks Emilien and Giulio for your valuable feedback. I will start
working towards finalizing the workbook and the actions required.

> would you be able to join the PTG to help us with the session on the
> overcloud settings optimization?
I will come back on this, as I have not planned for it yet. If it
works out, I will update the etherpad.

Regards,
Saravanan KR


On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente  wrote:
> On 01/04/2017 09:13 AM, Saravanan KR wrote:
>>
>> Hello,
>>
>> The aim of this mail is to ease the DPDK deployment with TripleO. I
>> would like to see if the approach of deriving THT parameters based on
>> introspection data, with a high-level input, would be feasible.
>>
>> Let me brief you on the complexity of certain parameters, which are
>> related to DPDK. The following parameters should be configured for a
>> well-performing DPDK cluster:
>> * NeutronDpdkCoreList (puppet-vswitch)
>> * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
>> review)
>> * NovaVcpuPinset (puppet-nova)
>>
>> * NeutronDpdkSocketMemory (puppet-vswitch)
>> * NeutronDpdkMemoryChannels (puppet-vswitch)
>> * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
>> * Interface to bind DPDK driver (network config templates)
>>
>> The complexity of deciding some of these parameters is explained in
>> the blog [1], where the CPUs have to be chosen in accordance with the
>> NUMA node associated with the interface. We are working on a spec [2]
>> to collect the required details from the baremetal via introspection.
>> The proposal is to create a Mistral workbook and actions
>> (tripleo-common), which will take minimal inputs and decide the actual
>> value of parameters based on the introspection data. I have created a
>> simple workbook [3] with what I have in mind (not final, only a
>> wireframe). The expected output of this workflow is to return the list
>> of inputs for "parameter_defaults", which will be used for the
>> deployment. I would like to hear from the experts whether there are
>> any drawbacks with this approach, or any better approach.
>
>
> hi, I am not an expert, I think John (on CC) knows more but this looks like
> a good initial step to me.
>
> once we have the workbook in good shape, we could probably integrate it in
> the tripleo client/common to (optionally) trigger it before every deployment
>
> would you be able to join the PTG to help us with the session on the
> overcloud settings optimization?
>
> https://etherpad.openstack.org/p/tripleo-ptg-pike
> --
> Giulio Fidente
> GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-10 Thread Giulio Fidente

On 01/04/2017 09:13 AM, Saravanan KR wrote:

Hello,

The aim of this mail is to ease the DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameters based on
introspection data, with a high-level input, would be feasible.

Let me brief you on the complexity of certain parameters, which are
related to DPDK. The following parameters should be configured for a
well-performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs have to be chosen in accordance with the
NUMA node associated with the interface. We are working on a spec [2]
to collect the required details from the baremetal via introspection.
The proposal is to create a Mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
value of parameters based on the introspection data. I have created a
simple workbook [3] with what I have in mind (not final, only a
wireframe). The expected output of this workflow is to return the list
of inputs for "parameter_defaults", which will be used for the
deployment. I would like to hear from the experts whether there are
any drawbacks with this approach, or any better approach.


hi, I am not an expert, I think John (on CC) knows more but this looks 
like a good initial step to me.


once we have the workbook in good shape, we could probably integrate it 
in the tripleo client/common to (optionally) trigger it before every 
deployment


would you be able to join the PTG to help us with the session on the 
overcloud settings optimization?


https://etherpad.openstack.org/p/tripleo-ptg-pike
--
Giulio Fidente
GPG KEY: 08D733BA





Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-10 Thread Emilien Macchi
On Wed, Jan 4, 2017 at 3:13 AM, Saravanan KR  wrote:
> Hello,
>
> The aim of this mail is to ease the DPDK deployment with TripleO. I
> would like to see if the approach of deriving THT parameters based on
> introspection data, with a high-level input, would be feasible.
>
> Let me brief you on the complexity of certain parameters, which are
> related to DPDK. The following parameters should be configured for a
> well-performing DPDK cluster:
> * NeutronDpdkCoreList (puppet-vswitch)
> * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under review)
> * NovaVcpuPinset (puppet-nova)
>
> * NeutronDpdkSocketMemory (puppet-vswitch)
> * NeutronDpdkMemoryChannels (puppet-vswitch)
> * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
> * Interface to bind DPDK driver (network config templates)
>
> The complexity of deciding some of these parameters is explained in
> the blog [1], where the CPUs have to be chosen in accordance with the
> NUMA node associated with the interface. We are working on a spec [2]
> to collect the required details from the baremetal via introspection.
> The proposal is to create a Mistral workbook and actions
> (tripleo-common), which will take minimal inputs and decide the actual
> value of parameters based on the introspection data. I have created a
> simple workbook [3] with what I have in mind (not final, only a
> wireframe). The expected output of this workflow is to return the list
> of inputs for "parameter_defaults", which will be used for the
> deployment. I would like to hear from the experts whether there are
> any drawbacks with this approach, or any better approach.
>
> This workflow will ease the TripleO UI's need to integrate DPDK, as
> the UI (user) has to choose only the interface for DPDK [and
> optionally, the number of CPUs required for PMD and host]. Of course,
> the introspection should be completed first, with which it will be
> easy to deploy a DPDK cluster.
>
> There is added complexity if the cluster contains heterogeneous
> nodes, for example a cluster having HP and Dell machines with
> different CPU layouts; we would need to enhance the workflow to take
> actions based on roles/nodes, which brings in a requirement of
> localizing the above-mentioned variables per role. For now, consider
> this proposal for a homogeneous cluster; if there is value in this, I
> will work towards heterogeneous clusters too.
>
> Please share your thoughts.

Using Mistral workflows for this use-case seems valuable to me. I like
your step-by-step approach, and also the fact that it will ease the
TripleO UI with this proposal.

> Regards,
> Saravanan KR
>
>
> [1] https://krsacme.github.io/blog/post/dpdk-pmd-cpu-list/
> [2] https://review.openstack.org/#/c/396147/
> [3] https://gist.github.com/krsacme/c5be089d6fa216232d49c85082478419
> [4] 
> https://review.openstack.org/#/c/411797/6/extraconfig/pre_network/host_config_and_reboot.role.j2.yaml
>



-- 
Emilien Macchi



[openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-04 Thread Saravanan KR
Hello,

The aim of this mail is to ease the DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameters based on
introspection data, with a high-level input, would be feasible.

Let me brief you on the complexity of certain parameters, which are
related to DPDK. The following parameters should be configured for a
well-performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs have to be chosen in accordance with the
NUMA node associated with the interface. We are working on a spec [2]
to collect the required details from the baremetal via introspection.
The proposal is to create a Mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
value of parameters based on the introspection data. I have created a
simple workbook [3] with what I have in mind (not final, only a
wireframe). The expected output of this workflow is to return the list
of inputs for "parameter_defaults", which will be used for the
deployment. I would like to hear from the experts whether there are
any drawbacks with this approach, or any better approach.
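
To make the expected output concrete, the returned "parameter_defaults"
might look like the following (all values are illustrative only; the
real ones would be computed from the introspection data):

  parameter_defaults:
    NeutronDpdkCoreList: "1,17"
    NovaVcpuPinset: "2-16,18-31"
    NeutronDpdkSocketMemory: "1024,1024"
    NeutronDpdkMemoryChannels: "4"
    ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"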

This workflow will ease the TripleO UI's need to integrate DPDK, as
the UI (user) has to choose only the interface for DPDK [and
optionally, the number of CPUs required for PMD and host]. Of course,
the introspection should be completed first, with which it will be
easy to deploy a DPDK cluster.

There is added complexity if the cluster contains heterogeneous nodes,
for example a cluster having HP and Dell machines with different CPU
layouts; we would need to enhance the workflow to take actions based
on roles/nodes, which brings in a requirement of localizing the
above-mentioned variables per role. For now, consider this proposal
for a homogeneous cluster; if there is value in this, I will work
towards heterogeneous clusters too.

Please share your thoughts.

Regards,
Saravanan KR


[1] https://krsacme.github.io/blog/post/dpdk-pmd-cpu-list/
[2] https://review.openstack.org/#/c/396147/
[3] https://gist.github.com/krsacme/c5be089d6fa216232d49c85082478419
[4] 
https://review.openstack.org/#/c/411797/6/extraconfig/pre_network/host_config_and_reboot.role.j2.yaml
