Re: [openstack-dev] [Ironic][Neutron] - Integration with neutron using external attachment point

2014-05-20 Thread Russell Haering
We've been experimenting some with how to use Neutron with Ironic here at
Rackspace.

Our very experimental code:
https://github.com/rackerlabs/ironic-neutron-plugin

Our objective is the same as what you're describing, to allow Nova servers
backed by Ironic to attach to arbitrary Neutron networks. We're initially
targeting VLAN-based networks only, but eventually want to do VXLAN from
the top-of-rack switches, controlled via an SDN controller.

Our approach is a little different from what you're describing, though. Our
objective is to modify the existing Nova-Neutron interaction as little
as possible, which means approaching the problem by thinking "how would an
L2 agent do this?"

The workflow looks something like:

1. Nova calls Neutron to create a virtual port. Because this happens
_before_ Nova touches the virt driver, the port is at this point identical
to one created for a virtual server.
2. Nova executes the spawn method of the Ironic virt driver, which makes
some calls to Ironic.
3. Inside Ironic, we know about the physical switch ports that the selected
Node is connected to. This information is discovered early-on using LLDP
and stored in the Ironic database.
4. We actually need the node to remain on an internal provisioning VLAN for
most of the provisioning process, but once we're done with on-host work we
turn the server off.
5. Ironic deletes a Neutron port that was created at bootstrap time to
trunk the physical switch ports for provisioning.
6. Ironic updates each of the customer's Neutron ports with information
about its physical switch port.
7. Our Neutron extension configures the switches accordingly.
8. Then Ironic brings the server back up.
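In rough Python, the Ironic-side piece of that workflow might be sketched like this (the client class, field names, and method names here are illustrative stand-ins, not our actual plugin code):

```python
# Illustrative sketch of the Ironic-side flow described above. FakeNeutron
# stands in for a real Neutron client; the 'switch_ports' field mirrors our
# experimental extension but is not a settled API.

class FakeNeutron:
    def __init__(self, ports):
        self.ports = ports  # port_id -> port body

    def delete_port(self, port_id):
        del self.ports[port_id]

    def update_port(self, port_id, body):
        self.ports[port_id].update(body['port'])


def attach_customer_networks(neutron, node, customer_port_ids):
    """Drop the provisioning trunk, then map each customer port to the
    node's LLDP-discovered physical switch ports."""
    # Delete the bootstrap-time Neutron port that trunked the physical
    # switch ports for provisioning.
    neutron.delete_port(node['provisioning_port_id'])

    # Tell Neutron which physical switch ports back each customer port;
    # the extension then reconfigures the switches accordingly.
    for port_id in customer_port_ids:
        neutron.update_port(
            port_id, {'port': {'switch_ports': node['switch_ports']}})
```

The destroy path is just this in reverse: strip the switch mapping from the ports and re-create the internal trunked port.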

The destroy process basically does the reverse. Ironic removes the physical
switch mapping from the Neutron ports, re-creates an internal trunked port,
does some work to tear down the server, then passes control back to Nova.
At that point Nova can do what it wants with the Neutron ports.
Hypothetically that could include allocating them to a different Ironic
Node, etc, although in practice it just deletes them.

Again, this is all very experimental in nature, but it seems to work fairly
well for the use-cases we've considered. We'd love to find a way to
collaborate with others working on similar problems.

Thanks,
Russell


On Tue, May 20, 2014 at 7:17 AM, Akihiro Motoki amot...@gmail.com wrote:

 # Added [Neutron] tag as well.

 Hi Igor,

 Thanks for the comment. We are already aware of them, as I commented in
 the Summit session and the ML2 weekly meeting. Kevin's blueprint now
 covers Ironic integration and the layer-2 network gateway, and I believe
 the campus-network blueprint will be covered as well.

 We think the work can be split into a generic API definition and
 implementations (including ML2). In the external attachment point
 blueprint review, the API and generic topics have mainly been discussed
 so far, and the implementation details have not been discussed much yet.
 The ML2 implementation details can be discussed later (separately or as
 part of the blueprint review).

 I am not sure what changes are proposed in blueprint [1].
 AFAIK an SDN/OpenFlow-controller-based approach can support this,
 but how can we achieve this in the existing open source implementations?
 I am also interested in the ML2 implementation details.

 Anyway more input will be appreciated.

 Thanks,
 Akihiro

 On Tue, May 20, 2014 at 7:13 PM, Igor Cardoso igordc...@gmail.com wrote:
  Hello Kevin.
  There is a similar Neutron blueprint [1], originally meant for Havana but
  now aiming for Juno.
  I would be happy to join efforts with you regarding our blueprints.
  See also: [2].
 
  [1] https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
  [2] https://blueprints.launchpad.net/neutron/+spec/campus-network
 
 
  On 19 May 2014 23:52, Kevin Benton blak...@gmail.com wrote:
 
  Hello,
 
  I am working on an extension for neutron to allow external attachment
  point information to be stored and used by backend plugins/drivers to
  place switch ports into neutron networks [1].

  One of the primary use cases is to integrate ironic with neutron. The
  basic workflow is that ironic will create the external attachment points
  when servers are initially installed. This step could either be automated
  (extracting the switch ID and port number from an LLDP message) or it
  could be performed manually by an admin who notes the ports a server is
  plugged into.
 
  Then when an instance is chosen for assignment and the neutron port needs
  to be created, the creation request would reference the corresponding
  attachment ID, and neutron would configure the physical switch port to
  place the port on the appropriate neutron network.
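As a sketch of what that creation request might carry (the `external_attachment_id` field name is purely illustrative; the real shape is whatever the blueprint review settles on):

```python
# Hypothetical request body for creating a Neutron port bound to an
# external attachment point. 'external_attachment_id' is an invented
# field name used for illustration only.
def build_port_request(network_id, attachment_id):
    return {
        'port': {
            'network_id': network_id,
            # References the attachment point created at install time
            # (e.g. from LLDP discovery), so the backend plugin/driver
            # can configure the corresponding physical switch port.
            'external_attachment_id': attachment_id,
        }
    }
```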
 
  If this workflow won't work for Ironic, please respond to this email or
  leave comments on the blueprint review.
 
  1. https://review.openstack.org/#/c/87825/
 
 
  Thanks
  --
  Kevin Benton
 

Re: [openstack-dev] [Ironic] Moving to a formal design process

2014-05-19 Thread Russell Haering
+1 to this process, and I think the template is pretty reasonable.

One thing to call out, maybe: are there any known failure scenarios? If
so, why shouldn't they block the spec?


On Mon, May 19, 2014 at 5:51 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 Added -

 https://github.com/devananda/ironic-specs/commit/7f34f353332ad5b26830dadc8c9f870df399feb7


 On Sun, May 18, 2014 at 6:13 PM, Robert Collins robe...@robertcollins.net
  wrote:

 I'd like to suggest two things.

 Firstly, a section on scale (as opposed to performance).

 Secondly, I'd like to see additional hard requirements that will be
 added to drivers called out (e.g. a 'Driver Impact' section).

 -Rob

 On 19 May 2014 10:03, Devananda van der Veen devananda@gmail.com
 wrote:
  Hi all,
 
  As with several other projects, and as discussed at the summit, Ironic is
  moving to a formal / specs-based design process. The reasons for this have
  been well summarized in previous email threads in other projects [*], but
  in short, it's because, until now, nearly all our blueprints lacked a
  design specification which could be compared to the proposed code,
  resulting in the code review also being a design review. This week, I will
  be resetting the "Definition" status of all blueprints to "New", and will
  require everything to go through a specs review process -- yes, even the
  ones that were previously approved.
 
  I've proposed the creation of the openstack/ironic-specs repo here:
https://review.openstack.org/#/c/94113/
 
  And put up an initial version on github to start the process:
https://github.com/devananda/ironic-specs
 
 https://github.com/devananda/ironic-specs/blob/master/specs/template.rst
 
  I've begun sketching out specs proposals for some of the
  essential-for-juno items, which can be found in the above repo for now,
  and will be proposed for review once the openstack/ironic-specs
  repository is created.

  For what it's worth, I based this on the Nova specs repo, with some
  customization geared towards Ironic (e.g., I removed the notifications
  section and added some comments regarding hardware). At this point, I'd
  like feedback from other core reviewers and folks familiar with the specs
  process. Please think about the types of architectural changes you look
  for during code reviews and make sure the specs template addresses them,
  and that they apply to Ironic.

  I will focus on creating and landing the specs for items essential for
  graduation first, and then prioritize review of additional feature specs
  based on the community feedback we received at the Juno Design Summit. I
  will send another email soliciting community members and vendors to
  propose specs for the features they are working on once the initial work
  on the repo is complete (hopefully before end-of-week).
 
  Regards,
  Devananda
 
 
  [*] eg.:
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-April/032753.html
 
 



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Should we adopt a blueprint design process

2014-04-17 Thread Russell Haering
Completely agree.

We're spending too much time discussing features after they're implemented,
which makes contribution more difficult for everyone. Forcing an explicit
design+review process, using the same tools as we use for coding+review
seems like a great idea. If it doesn't work we can iterate.


On Thu, Apr 17, 2014 at 11:01 AM, Kyle Mestery mest...@noironetworks.com wrote:

 On Thu, Apr 17, 2014 at 12:11 PM, Devananda van der Veen
 devananda@gmail.com wrote:
  Hi all,
 
  The discussion of blueprint review has come up recently for several
  reasons, not the least of which is that I haven't yet reviewed many of
  the blueprints that have been filed recently.
 
  My biggest issue with launchpad blueprints is that they do not provide a
  usable interface for design iteration prior to writing code. Between the
  whiteboard section, wikis, and etherpads, we have muddled through a few
  designs (namely cinder and ceilometer integration) with accuracy, but the
  vast majority of BPs are basically reviewed after they're implemented.
  This seems to be a widespread objection to launchpad blueprints within
  the OpenStack community, which others are trying to solve. Having now
  looked at what Nova is doing with the nova-specs repo, and considering
  that TripleO is also moving to that format for blueprint submission, and
  considering that we have a very good "review things in gerrit" culture in
  the Ironic community already, I think it would be a very positive change.
 
  For reference, here is the Nova discussion thread:
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html
 
  and the specs repo BP template:
  https://github.com/openstack/nova-specs/blob/master/specs/template.rst
 
  So, I would like us to begin using this development process over the
  course of Juno. We have a lot of BPs up right now that are light on
  details, and, rather than iterate on each of them in launchpad, I would
  like to propose that:
  * we create an ironic-specs repo, based on Nova's format, before the
    summit
  * I will begin reviewing BPs leading up to the summit, focusing on
    features that were originally targeted to Icehouse and didn't make it,
    or are obviously achievable for J1
  * we'll probably discuss blueprints and milestones at the summit, and
    will probably adjust targets
  * after the summit, for any BP not targeted to J1, we require blueprint
    proposals to go through the spec review process before merging any
    associated code.
 
  Cores and interested parties, please reply to this thread with your
  opinions.
 
 I think this is a great idea Devananda. The Neutron community has
 moved to this model for Juno as well, and people have been very
 positive so far.

 Thanks,
 Kyle

  --
  Devananda
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Merging the agent driver

2014-04-16 Thread Russell Haering
All,

If anyone has a chance to do some review, we really want to get this one
merged: https://review.openstack.org/#/c/81919/

This will clear the way for us to work on getting the driver reviewed,
followed by a series of smaller patches.

Thanks,
Russell


On Tue, Apr 8, 2014 at 4:58 PM, Jim Rollenhagen j...@jimrollenhagen.com wrote:

 Hi all,

 As Deva requested, our team put up a merge request for the IPA driver (
 https://review.openstack.org/#/c/84795/) as soon as Juno opened. We’ve
 continued to update this patch and iterate on the agent model. We are doing
 our best not to get too far ahead of master, but we also want to get a
 working prototype done ASAP, specifically before Atlanta.

 This merge request needs reviews, but also depends on two other reviews:
 - https://review.openstack.org/#/c/81391/
 - https://review.openstack.org/#/c/81919/

 The reason I would like to get these patches moving quickly is because we
 have some other patches depending on them:
 - https://review.openstack.org/#/c/85228/
 - https://review.openstack.org/#/c/86173/
 - https://review.openstack.org/#/c/86141/

 And we’re developing more as we speak.

 So, that said - what can we do to help get these patches through faster?
 Whether that means our team reviewing Ironic things to help out Ironic
 cores, or answering questions about the patches that are up, we’re willing
 to do it to get these through faster.

 // jim



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nodeless Vendor Passthru API

2014-03-25 Thread Russell Haering
On Tue, Mar 25, 2014 at 6:56 AM, Lucas Alvares Gomes
lucasago...@gmail.com wrote:


 Hi Russell,

 Ironic allows drivers to expose a vendor passthru API on a Node. This
 basically serves two purposes:

 1. Allows drivers to expose functionality that hasn't yet been
 standardized in the Ironic API. For example, the Seamicro driver exposes
 attach_volume, set_boot_device and set_node_vlan_id passthru methods.
 2. Vendor passthru is also used by the PXE deploy driver as an internal
 RPC callback mechanism. The deploy ramdisk makes calls to the passthru API
 to signal for a deployment to continue once a server has booted.

 For the purposes of this discussion I want to focus on case #2. Case #1
 is certainly worth a separate discussion - we started this in
 #openstack-ironic on Friday.

 In the new agent we are working on, we want to be able to look up what
 node the agent is running on, and eventually to be able to register a new
 node automatically. We will perform an inventory of the server and submit
 that to Ironic, where it can be used to map the agent to an existing Node
 or to create a new one. Once the agent knows what node it is on, it will
 check in with a passthru API much like that used by the PXE driver - in
 some configurations this might trigger an immediate continue of an
 ongoing deploy, in others it might simply register the agent as available
 for new deploys in the future.


 Maybe another way to look up what node the agent is running on would be to
 look at the MAC address of that node. Having it on hand, you could then do
 a GET /ports/detail and find which port has that MAC associated with it;
 once you find the port, you can look at the node_uuid field, which holds
 the UUID of the node that the port belongs to (all ports have a node_uuid,
 it's mandatory). So right now you would need to list all the ports and
 find that MAC address there, but I have a review up that might help you
 with this by allowing you to get a port using its address as input:
 https://review.openstack.org/#/c/82773/ (GET /ports/detail?address=mac).

 What do you think?


Right, we discussed this possibility as well. Internally that's actually
how the first iteration of our lookup call will work (mapping MAC addresses
to ports to nodes).
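Roughly, that first iteration amounts to:

```python
# First-iteration lookup sketch: match the agent's MAC addresses against
# port records (each Ironic port carries a mandatory node_uuid). 'ports'
# is the list a GET /ports/detail would return; names are illustrative.
def node_uuid_for_agent(ports, agent_macs):
    agent_macs = {mac.lower() for mac in agent_macs}
    for port in ports:
        if port['address'].lower() in agent_macs:
            return port['node_uuid']
    return None  # unknown hardware; a candidate for auto-enrollment
```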

This can definitely be made to work, but in my mind it has a few
limitations:

1. It limits how we can do lookups. In the future I'd like to be able to
consider serial numbers, hardware profiles, etc when trying to map an agent
to a node. Needing to expose an API for each of these is going to be
infeasible.
2. It limits how we do enrollment.
3. It forces us to expose more of the API to agents.

Items 1 and 2 seem like things that someone deploying Ironic is especially
likely to want to customize. For example, I might have some business
process around replacing NICs where I want to customize how a server is
uniquely identified (or even hit an external source to do it), or I might
want to override enrollment to hook into Fuel.

While we can probably find a way to solve our current problem, how can we
generically solve the need for an agent to talk to a driver (either in
order to centralize orchestration, or because access to the database is
needed) without needing a Node UUID?

Thanks,
Russell
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Nodeless Vendor Passthru API

2014-03-24 Thread Russell Haering
All,

Ironic allows drivers to expose a vendor passthru API on a Node. This
basically serves two purposes:

1. Allows drivers to expose functionality that hasn't yet been standardized
in the Ironic API. For example, the Seamicro driver exposes
attach_volume, set_boot_device and set_node_vlan_id passthru methods.
2. Vendor passthru is also used by the PXE deploy driver as an internal RPC
callback mechanism. The deploy ramdisk makes calls to the passthru API to
signal for a deployment to continue once a server has booted.

For the purposes of this discussion I want to focus on case #2. Case #1 is
certainly worth a separate discussion - we started this in
#openstack-ironic on Friday.

In the new agent we are working on, we want to be able to look up what node
the agent is running on, and eventually to be able to register a new node
automatically. We will perform an inventory of the server and submit that
to Ironic, where it can be used to map the agent to an existing Node or to
create a new one. Once the agent knows what node it is on, it will check in
with a passthru API much like that used by the PXE driver - in some
configurations this might trigger an immediate continue of an ongoing
deploy, in others it might simply register the agent as available for new
deploys in the future.

The point here is that we need a way for the agent driver to expose a
top-level lookup API, which doesn't require a Node UUID in the URL.

I've got a review (https://review.openstack.org/#/c/81919/) up which
explores one possible implementation of this. It basically routes POSTs to
/drivers/driver_name/vendor_passthru/method_name to a new method on the
vendor interface.
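For example, an agent performing a lookup would POST to a URL of this shape (the "lookup" method name is ours, and the path is per the patch under review, not a settled API):

```python
# Sketch of the nodeless vendor passthru URL shape explored in the review.
def driver_passthru_url(api_root, driver, method):
    return '%s/drivers/%s/vendor_passthru/%s' % (api_root, driver, method)

# An agent would POST its hardware inventory to, e.g.:
#   driver_passthru_url('http://ironic:6385/v1', 'agent', 'lookup')
```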

Importantly, I don't believe that this is a useful way for vendors to
implement new consumer-facing functionality. If we decide to take this
approach, we should reject drivers that try to do so. It is intended *only* for
internal communications with deploy agents.

Another possibility is that we could create a new API service intended
explicitly to serve use case #2 described above, which doesn't include most
of the existing public paths. In our environment I expect us to allow
agents whitelisted access to only two specific paths (lookup and checkin),
but this might be a better way to achieve that.

Thoughts?

Thanks,
Russell
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-07 Thread Russell Haering
Vladimir,

Hey, I'm on the team working on this agent, let me offer a little history.
We were working on a system of our own for managing bare metal gear which
we were calling Teeth. The project was mostly composed of:

1. teeth-agent: an on-host provisioning agent
2. teeth-overlord: a centralized automation mechanism

Plus a few other libraries (including teeth-rest, which contains some
common code we factored out of the agent/overlord).

A few weeks back we decided to shift our focus to using Ironic. At this
point we have effectively abandoned teeth-overlord, and are instead
focusing on upstream Ironic development, continued agent development and
building an Ironic driver capable of talking to our agent.

Over the last few days we've been removing non-OS-approved dependencies
from our agent: I think teeth-rest (and werkzeug, which it depends on) will
be the last to go when we replace it with Pecan+WSME sometime in the next
few days.

Thanks,
Russell


On Fri, Mar 7, 2014 at 8:26 AM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 As far as I understand, there are 4 projects which are connected with this
 topic. Another two projects which were not mentioned by Devananda are
 https://github.com/rackerlabs/teeth-rest
 https://github.com/rackerlabs/teeth-overlord

 Vladimir Kozhukalov


 On Fri, Mar 7, 2014 at 4:41 AM, Devananda van der Veen 
 devananda@gmail.com wrote:

 All,

 The Ironic team has been discussing the need for a deploy agent since
 well before the last summit -- we even laid out a few blueprints along
 those lines. That work was deferred  and we have been using the same deploy
 ramdisk that nova-baremetal used, and we will continue to use that ramdisk
 for the PXE driver in the Icehouse release.

 That being the case, at the sprint this week, a team from Rackspace
 shared work they have been doing to create a more featureful hardware agent
 and an Ironic driver which utilizes that agent. Early drafts of that work
 can be found here:

 https://github.com/rackerlabs/teeth-agent
 https://github.com/rackerlabs/ironic-teeth-driver

 I've updated the original blueprint and assigned it to Josh. For
 reference:

 https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk

 I believe this agent falls within the scope of the baremetal provisioning
 program, and welcome their contributions and collaboration on this. To that
 effect, I have suggested that the code be moved to a new OpenStack project
 named openstack/ironic-python-agent. This would follow an independent
 release cycle, and reuse some components of tripleo (os-*-config). To keep
 the collaborative momentup up, I would like this work to be done now (after
 all, it's not part of the Ironic repo or release). The new driver which
 will interface with that agent will need to stay on github -- or in a
 gerrit feature branch -- until Juno opens, at which point it should be
 proposed to Ironic.

 The agent architecture we discussed is roughly:
 - a pluggable JSON transport layer by which the Ironic driver will pass
 information to the ramdisk. Their initial implementation is a REST API.
 - a collection of hardware-specific utilities (python modules, bash
 scripts, what ever) which take JSON as input and perform specific actions
 (whether gathering data about the hardware or applying changes to it).
 - and an agent which routes the incoming JSON to the appropriate utility,
 and routes the response back via the transport layer.
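A minimal sketch of that routing piece (class, utility names, and message shape all invented for illustration):

```python
# Incoming JSON names a utility; the agent dispatches to it and hands the
# response back to whichever transport layer delivered the message.
class AgentRouter:
    def __init__(self):
        self._utilities = {}

    def register(self, name, func):
        self._utilities[name] = func

    def handle(self, message):
        func = self._utilities.get(message.get('utility'))
        if func is None:
            return {'error': 'unknown utility'}
        return {'result': func(message.get('params', {}))}
```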


 -Devananda






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-07 Thread Russell Haering
Thanks for putting that info together!

I'm not sure exactly what order things need to happen in, but Jay (JayF) is
working on the infra bits of getting a repository and CI, and Jim (jroll)
is getting the Pecan+WSME part done. Hopefully we'll have it all ready by
Monday.

On Fri, Mar 7, 2014 at 12:53 PM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 Russell,

 Great to hear you are going to move towards Pecan+WSME. Yesterday I had a
 look at the teeth projects. In the next few days I am going to start
 contributing. First of all, I think we need to arrange all that stuff
 about the pluggable architecture. I've created a wiki page about the
 Ironic python agent:
 https://wiki.openstack.org/wiki/Ironic-python-agent.

 And a question about contributing: have you managed to send a pull request
 to openstack-infra in order to move this project into
 github.com/stackforge? Or are we supposed to arrange everything (werkzeug
 -> Pecan/WSME, architectural questions) before we move this agent to
 stackforge?





 Vladimir Kozhukalov


 On Fri, Mar 7, 2014 at 8:53 PM, Russell Haering 
 russellhaer...@gmail.com wrote:

 Vladimir,

 Hey, I'm on the team working on this agent, let me offer a little
 history. We were working on a system of our own for managing bare metal
 gear which we were calling Teeth. The project was mostly composed of:

 1. teeth-agent: an on-host provisioning agent
 2. teeth-overlord: a centralized automation mechanism

 Plus a few other libraries (including teeth-rest, which contains some
 common code we factored out of the agent/overlord).

 A few weeks back we decided to shift our focus to using Ironic. At this
 point we have effectively abandoned teeth-overlord, and are instead
 focusing on upstream Ironic development, continued agent development and
 building an Ironic driver capable of talking to our agent.

 Over the last few days we've been removing non-OS-approved dependencies
 from our agent: I think teeth-rest (and werkzeug, which it depends on) will
 be the last to go when we replace it with Pecan+WSME sometime in the next
 few days.

 Thanks,
 Russell


 On Fri, Mar 7, 2014 at 8:26 AM, Vladimir Kozhukalov 
 vkozhuka...@mirantis.com wrote:

 As far as I understand, there are 4 projects which are connected with
 this topic. Another two projects which were not mentioned by Devananda are
 https://github.com/rackerlabs/teeth-rest
 https://github.com/rackerlabs/teeth-overlord

 Vladimir Kozhukalov


 On Fri, Mar 7, 2014 at 4:41 AM, Devananda van der Veen 
 devananda@gmail.com wrote:

 All,

 The Ironic team has been discussing the need for a deploy agent since
 well before the last summit -- we even laid out a few blueprints along
 those lines. That work was deferred  and we have been using the same deploy
 ramdisk that nova-baremetal used, and we will continue to use that ramdisk
 for the PXE driver in the Icehouse release.

 That being the case, at the sprint this week, a team from Rackspace
 shared work they have been doing to create a more featureful hardware agent
 and an Ironic driver which utilizes that agent. Early drafts of that work
 can be found here:

 https://github.com/rackerlabs/teeth-agent
 https://github.com/rackerlabs/ironic-teeth-driver

 I've updated the original blueprint and assigned it to Josh. For
 reference:

 https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk

 I believe this agent falls within the scope of the baremetal
 provisioning program, and welcome their contributions and collaboration on
 this. To that effect, I have suggested that the code be moved to a new
 OpenStack project named openstack/ironic-python-agent. This would follow
 an independent release cycle, and reuse some components of tripleo
 (os-*-config). To keep the collaborative momentum up, I would like this
 work to be done now (after all, it's not part of the Ironic repo or
 release). The new driver which will interface with that agent will need to
 stay on github -- or in a gerrit feature branch -- until Juno opens, at
 which point it should be proposed to Ironic.

 The agent architecture we discussed is roughly:
 - a pluggable JSON transport layer by which the Ironic driver will pass
 information to the ramdisk. Their initial implementation is a REST API.
 - a collection of hardware-specific utilities (python modules, bash
 scripts, what ever) which take JSON as input and perform specific actions
 (whether gathering data about the hardware or applying changes to it).
 - and an agent which routes the incoming JSON to the appropriate
 utility, and routes the response back via the transport layer.


 -Devananda



