Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Tomasz Pa
On Apr 10, 2017 1:02 PM, "John Garbutt"  wrote:

On 10 April 2017 at 11:31,  .

With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)


That's not entirely true: on the Intel Rack Scale Design platform you can
attach/detach PCI devices on the fly.



TP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Nisha Agarwal
Hi Jay, Dmitry,

>I strongly challenge the assertion made here that inspection is only
useful in scheduling contexts.
OK, I agree that scheduling is not the only purpose of inspection, but it
is one of its main aspects.

>There are users who simply want to know about their hardware, and read the
results as posted to swift.
This is true only for ironic-inspector. If we say all the features of
ironic-inspector are "OK" for ironic, then why is OOB inspection not
allowed to discover or do the same things that ironic-inspector already
does? ironic-inspector already discovers the PCI device data in the format
nova supports. Why don't the features supported by ironic-inspector have to
go through ironic review (for capabilities and so on)? ironic-inspector
does have its own review process, but it doesn't centralize its approach
(at least the field/attribute names) in ironic, which is, and should be,
common between in-band inspection and out-of-band inspection.

All of the above is just to emphasize that ironic-inspector is not the only
way of doing inspection in ironic.

> Inspection also handles discovery of new nodes when given basic
information about them.
That applies only to ironic-inspector.

> Also ironic-inspector is useful for automatically defining resource
classes on nodes, so I'm not sure about this purpose being defeated as well.
I wasn't aware that the creation of custom resource classes is already
automated by ironic-inspector. If it is already there, it should be done by
ironic instead of ironic-inspector, because that is required even by OOB
inspection. If the solution lives in ironic, OOB inspection can also use it
for scheduling.

Regards
Nisha

On Tue, Apr 11, 2017 at 9:34 PM, Dmitry Tantsur  wrote:

> On 04/11/2017 05:28 PM, Jay Faulkner wrote:
>
>>
>> On Apr 11, 2017, at 12:54 AM, Nisha Agarwal 
>>> wrote:
>>>
>>> Hi John,
>>>
>>> With ironic I thought everything is "passed through" by default,
 because there is no virtualization in the way. (I am possibly
 incorrectly assuming no BIOS tricks to turn off or re-assign PCI
 devices dynamically.)

>>>
>>> Yes with ironic everything is passed through by default.
>>>
>>> So I am assuming this is purely a scheduling concern. If so, why are
 the new custom resource classes not good enough? "ironic_blue" could
 mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
 and one 1Gb nic, etc.
 Or is there something else that needs addressing here? Trying to
 describe what you get with each flavor to end users?

>>> Yes this is purely from scheduling perspective.
>>> Currently how ironic works is we discover server attributes and populate
>>> them into node object. These attributes are then used for further
>>> scheduling of the node from nova scheduler using ComputeCapabilities
>>> filter. So this is something automated on ironic side, like we do
>>> inspection of the node properties/attributes and user need to create the
>>> flavor of their choice and the node which meets the user need is scheduled
>>> for ironic deploy.
>>> With resource class name in place in ironic, we ask user to do a manual
>>> step i.e. create a resource class name based on the hardware attributes and
>>> this need to be done on per node basis. For this user need to know the
>>> server hardware properties in advance before assigning the resource class
>>> name to the node(s) and then assign the resource class name manually to the
>>> node.
>>> In a broad way if i say, if we want to support scheduling based on
>>> quantity for ironic nodes there is no way we can do it through current
>>> resource class structure(actually just a tag) in ironic. A  user may want
>>> to schedule ironic nodes on different resources and each resource should be
>>> a different resource class (IMO).
>>>
>>> Are you needing to aggregating similar hardware in a different way to
 the above
 resource class approach?

>>> i guess no but the above resource class approach takes away the
>>> automation on the ironic side and the whole purpose of inspection is
>>> defeated.
>>>
>>>
>> I strongly challenge the assertion made here that inspection is only
>> useful in scheduling contexts. There are users who simply want to know
>> about their hardware, and read the results as posted to swift. Inspection
>> also handles discovery of new nodes when given basic information about them.
>>
>
> Also ironic-inspector is useful for automatically defining resource
> classes on nodes, so I'm not sure about this purpose being defeated as well.
>
> /me makes a note to provide a few examples of such approach in
> ironic-inspector docs
>
> Not sure about OOB inspection though.
>
>
>
>> -
>> Jay Faulkner
>> OSIC
>>
>> Regards
>>> Nisha
>>>
>>>
>>> On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt 
>>> wrote:
>>> On 10 April 2017 at 11:31,   wrote:
>>>
 On Mon, 2017-04-10 at 11:50 

Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Dmitry Tantsur

On 04/11/2017 05:28 PM, Jay Faulkner wrote:



On Apr 11, 2017, at 12:54 AM, Nisha Agarwal  wrote:

Hi John,


With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)


Yes with ironic everything is passed through by default.


So I am assuming this is purely a scheduling concern. If so, why are
the new custom resource classes not good enough? "ironic_blue" could
mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
and one 1Gb nic, etc.
Or is there something else that needs addressing here? Trying to
describe what you get with each flavor to end users?

Yes this is purely from scheduling perspective.
Currently how ironic works is we discover server attributes and populate them 
into node object. These attributes are then used for further scheduling of the 
node from nova scheduler using ComputeCapabilities filter. So this is something 
automated on ironic side, like we do inspection of the node 
properties/attributes and user need to create the flavor of their choice and 
the node which meets the user need is scheduled for ironic deploy.
With resource class name in place in ironic, we ask user to do a manual step 
i.e. create a resource class name based on the hardware attributes and this 
need to be done on per node basis. For this user need to know the server 
hardware properties in advance before assigning the resource class name to the 
node(s) and then assign the resource class name manually to the node.
In a broad way if i say, if we want to support scheduling based on quantity for 
ironic nodes there is no way we can do it through current resource class 
structure(actually just a tag) in ironic. A  user may want to schedule ironic 
nodes on different resources and each resource should be a different resource 
class (IMO).


Are you needing to aggregating similar hardware in a different way to the above
resource class approach?

i guess no but the above resource class approach takes away the automation on 
the ironic side and the whole purpose of inspection is defeated.



I strongly challenge the assertion made here that inspection is only useful in 
scheduling contexts. There are users who simply want to know about their 
hardware, and read the results as posted to swift. Inspection also handles 
discovery of new nodes when given basic information about them.


Also ironic-inspector is useful for automatically defining resource classes on 
nodes, so I'm not sure about this purpose being defeated as well.


/me makes a note to provide a few examples of such an approach in the
ironic-inspector docs
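
Something roughly like the following introspection rule could do it; this
is only a sketch, expressed as a Python/JSON structure, and the condition
field path, vendor string and class name are made-up examples:

    rule = {
        "description": "Tag GPU machines with a custom resource class",
        "conditions": [
            # match on whatever the introspection data exposes for the model
            {"op": "eq",
             "field": "data://inventory.system_vendor.product_name",
             "value": "Example GPU Server 9000"},
        ],
        "actions": [
            # set the node's resource_class automatically after inspection
            {"action": "set-attribute",
             "path": "/resource_class",
             "value": "baremetal-gpu"},
        ],
    }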

Not sure about OOB inspection though.



-
Jay Faulkner
OSIC


Regards
Nisha


On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt  wrote:
On 10 April 2017 at 11:31,   wrote:

On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:

Hi team,

Please could you pour in your suggestions on the mail?

I raised a blueprint in Nova for this
https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic and
two RFEs at ironic side https://bugs.launchpad.net/ironic/+bug/1680780 and
https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.


If I understand you correctly, you want to be able to filter ironic
hosts by available PCI device, correct? Barring any possibility that
resource providers could do this for you yet, extending the nova ironic
driver to use the PCI passthrough filter sounds like the way to go.


With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)

So I am assuming this is purely a scheduling concern. If so, why are
the new custom resource classes not good enough? "ironic_blue" could
mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
and one 1Gb nic, etc.

Or is there something else that needs addressing here? Trying to
describe what you get with each flavor to end users? Are you needing
to aggregating similar hardware in a different way to the above
resource class approach?

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Jay Faulkner

> On Apr 11, 2017, at 12:54 AM, Nisha Agarwal  
> wrote:
> 
> Hi John,
> 
> >With ironic I thought everything is "passed through" by default,
> >because there is no virtualization in the way. (I am possibly
> >incorrectly assuming no BIOS tricks to turn off or re-assign PCI
> >devices dynamically.)
> 
> Yes with ironic everything is passed through by default. 
> 
> >So I am assuming this is purely a scheduling concern. If so, why are
> >the new custom resource classes not good enough? "ironic_blue" could
> >mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
> >and one 1Gb nic, etc.
> >Or is there something else that needs addressing here? Trying to
> >describe what you get with each flavor to end users?
> Yes this is purely from scheduling perspective. 
> Currently how ironic works is we discover server attributes and populate them 
> into node object. These attributes are then used for further scheduling of 
> the node from nova scheduler using ComputeCapabilities filter. So this is 
> something automated on ironic side, like we do inspection of the node 
> properties/attributes and user need to create the flavor of their choice and 
> the node which meets the user need is scheduled for ironic deploy.
> With resource class name in place in ironic, we ask user to do a manual step 
> i.e. create a resource class name based on the hardware attributes and this 
> need to be done on per node basis. For this user need to know the server 
> hardware properties in advance before assigning the resource class name to 
> the node(s) and then assign the resource class name manually to the node. 
> In a broad way if i say, if we want to support scheduling based on quantity 
> for ironic nodes there is no way we can do it through current resource class 
> structure(actually just a tag) in ironic. A  user may want to schedule ironic 
> nodes on different resources and each resource should be a different resource 
> class (IMO). 
> 
> >Are you needing to aggregating similar hardware in a different way to the 
> >above
> >resource class approach?
> i guess no but the above resource class approach takes away the automation on 
> the ironic side and the whole purpose of inspection is defeated.
> 

I strongly challenge the assertion made here that inspection is only useful in 
scheduling contexts. There are users who simply want to know about their 
hardware, and read the results as posted to swift. Inspection also handles 
discovery of new nodes when given basic information about them.

-
Jay Faulkner
OSIC

> Regards
> Nisha
> 
> 
> On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt  wrote:
> On 10 April 2017 at 11:31,   wrote:
> > On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
> >> Hi team,
> >>
> >> Please could you pour in your suggestions on the mail?
> >>
> >> I raised a blueprint in Nova for this
> >> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
> >> and two RFEs at ironic side
> >> https://bugs.launchpad.net/ironic/+bug/1680780 and
> >> https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.
> >
> > If I understand you correctly, you want to be able to filter ironic
> > hosts by available PCI device, correct? Barring any possibility that
> > resource providers could do this for you yet, extending the nova ironic
> > driver to use the PCI passthrough filter sounds like the way to go.
> 
> With ironic I thought everything is "passed through" by default,
> because there is no virtualization in the way. (I am possibly
> incorrectly assuming no BIOS tricks to turn off or re-assign PCI
> devices dynamically.)
> 
> So I am assuming this is purely a scheduling concern. If so, why are
> the new custom resource classes not good enough? "ironic_blue" could
> mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
> and one 1Gb nic, etc.
> 
> Or is there something else that needs addressing here? Trying to
> describe what you get with each flavor to end users? Are you needing
> to aggregating similar hardware in a different way to the above
> resource class approach?
> 
> Thanks,
> johnthetubaguy
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> -- 
> The Secret Of Success is learning how to use pain and pleasure, instead
> of having pain and pleasure use you. If You do that you are in control
> of your life. If you don't life controls you.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Nisha Agarwal
Hi John,

>With ironic I thought everything is "passed through" by default,
>because there is no virtualization in the way. (I am possibly
>incorrectly assuming no BIOS tricks to turn off or re-assign PCI
>devices dynamically.)

Yes, with ironic everything is passed through by default.

>So I am assuming this is purely a scheduling concern. If so, why are
>the new custom resource classes not good enough? "ironic_blue" could
>mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
>and one 1Gb nic, etc.
>Or is there something else that needs addressing here? Trying to
>describe what you get with each flavor to end users?
Yes, this is purely from a scheduling perspective.
Currently ironic works like this: we discover server attributes and
populate them into the node object. These attributes are then used for
scheduling of the node by the nova scheduler using the
ComputeCapabilitiesFilter. So this is automated on the ironic side: we
inspect the node properties/attributes, the user creates a flavor of their
choice, and the node that meets the user's needs is scheduled for ironic
deploy.
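
For illustration, roughly how that automated flow is expressed today (the
capability names here are just examples):

    node_properties = {
        "cpus": 24,
        "memory_mb": 131072,
        "local_gb": 930,
        # populated automatically by inspection
        "capabilities": "boot_mode:uefi,has_gpu:true",
    }
    flavor_extra_specs = {
        # matched by ComputeCapabilitiesFilter against the node capabilities
        "capabilities:has_gpu": "true",
    }
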
With the resource class name in place in ironic, we ask the user to do a
manual step, i.e. create a resource class name based on the hardware
attributes, and this needs to be done on a per-node basis. For this, the
user needs to know the server hardware properties in advance before
assigning the resource class name to the node(s), and then assign the
resource class name to each node manually.
More broadly, if we want to support scheduling based on quantity for ironic
nodes, there is no way we can do it through the current resource class
structure (actually just a tag) in ironic. A user may want to schedule
ironic nodes on different resources, and each resource should be a
different resource class (IMO).
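
For comparison, this is roughly how quantity-based PCI scheduling is
expressed for virtualised compute today (the alias name and device IDs are
only examples); the idea would be to let ironic nodes feed the same
PciPassthroughFilter:

    # a PCI alias, as configured on the nova side ([pci]/alias)
    pci_alias = {"vendor_id": "8086", "product_id": "1520",
                 "name": "fast_nic"}
    flavor_extra_specs = {
        # "two devices matching the fast_nic alias"
        "pci_passthrough:alias": "fast_nic:2",
    }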

>Are you needing to aggregating similar hardware in a different way to the
>above resource class approach?
I guess not, but the above resource class approach takes away the
automation on the ironic side, and the whole purpose of inspection is
defeated.

Regards
Nisha


On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt  wrote:

> On 10 April 2017 at 11:31,   wrote:
> > On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
> >> Hi team,
> >>
> >> Please could you pour in your suggestions on the mail?
> >>
> >> I raised a blueprint in Nova for this
> >> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
> >> and two RFEs at ironic side
> >> https://bugs.launchpad.net/ironic/+bug/1680780 and
> >> https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.
> >
> > If I understand you correctly, you want to be able to filter ironic
> > hosts by available PCI device, correct? Barring any possibility that
> > resource providers could do this for you yet, extending the nova ironic
> > driver to use the PCI passthrough filter sounds like the way to go.
>
> With ironic I thought everything is "passed through" by default,
> because there is no virtualization in the way. (I am possibly
> incorrectly assuming no BIOS tricks to turn off or re-assign PCI
> devices dynamically.)
>
> So I am assuming this is purely a scheduling concern. If so, why are
> the new custom resource classes not good enough? "ironic_blue" could
> mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
> and one 1Gb nic, etc.
>
> Or is there something else that needs addressing here? Trying to
> describe what you get with each flavor to end users? Are you needing
> to aggregating similar hardware in a different way to the above
> resource class approach?
>
> Thanks,
> johnthetubaguy
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-10 Thread John Garbutt
On 10 April 2017 at 11:31,   wrote:
> On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
>> Hi team,
>>
>> Please could you pour in your suggestions on the mail?
>>
>> I raised a blueprint in Nova for this
>> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
>> and two RFEs at ironic side https://bugs.launchpad.net/ironic/+bug/1680780
>> and https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.
>
> If I understand you correctly, you want to be able to filter ironic
> hosts by available PCI device, correct? Barring any possibility that
> resource providers could do this for you yet, extending the nova ironic
> driver to use the PCI passthrough filter sounds like the way to go.

With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)

So I am assuming this is purely a scheduling concern. If so, why are
the new custom resource classes not good enough? "ironic_blue" could
mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
and one 1Gb nic, etc.
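
A minimal sketch of how that could be wired up, assuming the custom
resource class flow (the names are just examples): tag each node with a
class, then have a flavor request one unit of the corresponding custom
Placement resource class.

    # set per node in ironic
    node = {"resource_class": "ironic_blue"}
    # requested by the flavor; the class name is normalised to CUSTOM_*
    flavor_extra_specs = {"resources:CUSTOM_IRONIC_BLUE": "1"}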

Or is there something else that needs addressing here? Trying to
describe what you get with each flavor to end users? Are you needing
to aggregate similar hardware in a different way to the above
resource class approach?

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-10 Thread sfinucan
On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
> Hi team,
> 
> Please could you pour in your suggestions on the mail?
> 
> I raised a blueprint in Nova for this
> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic and
> two RFEs at ironic side https://bugs.launchpad.net/ironic/+bug/1680780 and
> https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.

If I understand you correctly, you want to be able to filter ironic
hosts by available PCI device, correct? Barring any possibility that
resource providers could do this for you yet, extending the nova ironic
driver to use the PCI passthrough filter sounds like the way to go.

Stephen

> Regards
> Nisha
> 
> On Wed, Apr 5, 2017 at 11:03 PM, Nisha Agarwal wrote:
> > Hi,
> > 
> > I suggested to add few pci devices related attributes to ironic
> > inspection(out-of-band as well inband). https://review.openstack.org/#/c/338138.
> > 
> > I got the suggestion to convert them to standard pci device format
> > which nova understands. For example( as given in Nova code):
> > [{"count": 5, "vendor_id": "8086", "product_id": "1520",
> >  "extra_info":'{}'}],
> > 
> > For above to be supported for ironic scheduling, we will require
> > changes at two places: 
> > 1. nova - This should require a small code changes as
> > pci_passthrough filter already exists in nova. The ironic virt
> > driver should be able to consume the pci_device structure from
> > ironic node/database and pass it on to scheduler for scheduling and
> > it should add pci_passthrough filter in ironic nodes filter list.
> > 2. ironic - this definitely needs a spec which will suggest to add
> > the pci_device data structure to ironic node.
> > 
> > 
> > The ironic side work may take some time but Nova side looks to be a
> > small change. IMO we can have the nova side changes and ironic
> > database changes (to add pci_device field) parallely. 
> > Inspection can populate that new pci_device field for the node to
> > get scheduled.
> > 
> > AFAIK, ironic-inspector already has the plugin to discover pci
> > devices in the format nova has it today. If we get the scheduling
> > enabled based on pci_devices for ironic nodes, then inspector can
> > write this data to ironic node object by default.
> > 
> > What do you people think on this idea? Does it make sense? If its
> > worth to do this way i can own up this work.
> > 
> > Regards
> > Nisha
> > 
> > -- 
> > The Secret Of Success is learning how to use pain and pleasure,
> > instead
> > of having pain and pleasure use you. If You do that you are in
> > control
> > of your life. If you don't life controls you.
> > 
> 
> 
> 
> -- 
> The Secret Of Success is learning how to use pain and pleasure,
> instead
> of having pain and pleasure use you. If You do that you are in
> control
> of your life. If you don't life controls you.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-10 Thread Nisha Agarwal
Hi team,

Could you please share your suggestions on this mail?

I raised a blueprint in Nova for this
https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic and
two RFEs at ironic side https://bugs.launchpad.net/ironic/+bug/1680780 and
https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.

Regards
Nisha

On Wed, Apr 5, 2017 at 11:03 PM, Nisha Agarwal 
wrote:

> Hi,
>
> I suggested to add few pci devices related attributes to ironic
> inspection(out-of-band as well inband). https://review.openstack.org/#/c/338138.
>
> I got the suggestion to convert them to standard pci device format which
> nova understands. For example( as given in Nova code):
> [{"count": 5, "vendor_id": "8086", "product_id": "1520",
>  "extra_info":'{}'}],
>
> For above to be supported for ironic scheduling, we will require changes
> at two places:
> 1. nova - This should require a small code changes as pci_passthrough
> filter already exists in nova. The ironic virt driver should be able to
> consume the pci_device structure from ironic node/database and pass it on
> to scheduler for scheduling and it should add pci_passthrough filter in
> ironic nodes filter list.
> 2. ironic - this definitely needs a spec which will suggest to add the
> pci_device data structure to ironic node.
>
>
> The ironic side work may take some time but Nova side looks to be a small
> change. IMO we can have the nova side changes and ironic database changes
> (to add pci_device field) parallely.
> Inspection can populate that new pci_device field for the node to get
> scheduled.
>
> AFAIK, ironic-inspector already has the plugin to discover pci devices in
> the format nova has it today. If we get the scheduling enabled based on
> pci_devices for ironic nodes, then inspector can write this data to ironic
> node object by default.
>
> What do you people think on this idea? Does it make sense? If its worth to
> do this way i can own up this work.
>
> Regards
> Nisha
>
> --
> The Secret Of Success is learning how to use pain and pleasure, instead
> of having pain and pleasure use you. If You do that you are in control
> of your life. If you don't life controls you.
>



-- 
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-05 Thread Nisha Agarwal
Hi,

I have suggested adding a few PCI-device-related attributes to ironic
inspection (out-of-band as well as in-band):
https://review.openstack.org/#/c/338138.

I got the suggestion to convert them to the standard PCI device format
that nova understands, for example (as given in the nova code):
[{"count": 5, "vendor_id": "8086", "product_id": "1520",
 "extra_info":'{}'}]

For the above to be supported for ironic scheduling, we will require
changes in two places:
1. nova - This should require only small code changes, as the
PciPassthroughFilter already exists in nova. The ironic virt driver should
be able to consume the pci_device structure from the ironic node/database
and pass it on to the scheduler, and the PciPassthroughFilter should be
added to the ironic node filter list (see the sketch after this list).
2. ironic - This definitely needs a spec proposing the addition of the
pci_device data structure to the ironic node.
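
Purely as a sketch of the nova-side change in (1), not existing code:
assume the ironic node object gains a pci_devices field, and the ironic
virt driver reports it the way other virt drivers report
pci_passthrough_devices, so that the PciPassthroughFilter can consume it.

    import json

    def _get_pci_passthrough_devices(node):
        """Return the node's PCI inventory as the JSON string nova expects."""
        devices = getattr(node, "pci_devices", None) or []
        # e.g. [{"count": 5, "vendor_id": "8086", "product_id": "1520",
        #        "extra_info": {}}]
        return json.dumps(devices)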


The ironic-side work may take some time, but the nova side looks like a
small change. IMO we can do the nova-side changes and the ironic database
change (adding the pci_device field) in parallel.
Inspection can then populate the new pci_device field so the node can be
scheduled.

AFAIK, ironic-inspector already has a plugin to discover PCI devices in the
format nova uses today. If we get scheduling based on pci_devices enabled
for ironic nodes, the inspector can write this data to the ironic node
object by default.
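
For illustration, folding a raw per-device list (as an inspection tool
might report it) into the counted, nova-style structure shown above; the
device IDs are only examples:

    from collections import Counter

    raw = [
        {"vendor_id": "8086", "product_id": "1520"},
        {"vendor_id": "8086", "product_id": "1520"},
        {"vendor_id": "10de", "product_id": "1db4"},
    ]
    counted = [
        {"count": n, "vendor_id": v, "product_id": p, "extra_info": {}}
        for (v, p), n in Counter(
            (d["vendor_id"], d["product_id"]) for d in raw).items()
    ]
    # -> [{"count": 2, "vendor_id": "8086", "product_id": "1520", ...},
    #     {"count": 1, "vendor_id": "10de", "product_id": "1db4", ...}]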

What do you all think of this idea? Does it make sense? If it's worth doing
it this way, I can own this work.

Regards
Nisha

-- 
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev