[openstack-dev] [Ironic] Stepping down from IPA core

2015-09-21 Thread Josh Gachnang
Hey y'all, it's with a heavy heart I have to announce I'll be stepping down
from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a
healthcare startup (Triggr Health) and won't have the time to dedicate to
being an effective OpenStack reviewer.

Ever since the OnMetal team proposed IPA all the way back in the
Icehouse midcycle, this community has been welcoming, helpful, and all
around great. You've all helped me grow as a developer with your in-depth
and patient reviews, for which I am eternally grateful. I'm really sad I
won't get to see everyone in Tokyo.

I'll still be on IRC after leaving, so feel free to ping me for any reason
:)

- JoshNang


Re: [openstack-dev] [Ironic] thoughts on the midcycle

2014-12-30 Thread Josh Gachnang
I could definitely make a Bay Area meetup.

On Mon Dec 29 2014 at 3:50:04 PM Jim Rollenhagen <j...@jimrollenhagen.com>
wrote:

 On Mon, Dec 29, 2014 at 10:45:57PM +0000, Devananda van der Veen wrote:
  I'm sending the details of the midcycle in a separate email. Before you
  reply that you won't be able to make it, I'd like to share some
  thoughts / concerns.

  In the last few weeks, several people who I previously thought would
  attend told me that they can't. By my informal count, it looks like we
  will have at most 5 of our 10 core reviewers in attendance. I don't
  think we should cancel based on that, but it does mean that we need to
  set our expectations accordingly.

  Assuming that we will be lacking about half the core team, I think it
  will be more practical as a focused sprint, rather than a planning &
  design meeting. While that's a break from precedent, planning should
  be happening via the spec review process *anyway*. Also, we already
  have a larger backlog of specs and work than we had this time last
  cycle, but with the same size review team. Rather than adding to our
  backlog, I would like us to use this gathering to burn through some
  specs and land some code.

  That being said, I'd also like to put forth this idea: if we had a
  second gathering (with the same focus on writing code) the following
  week (let's say, Feb 11 - 13) in the SF Bay area -- who would attend?
  Would we be able to get the other half of the core team together and
  get more work done? Is this a good idea?

 I'm +1 on a Bay Area meetup; however, if it happens I likely won't be
 making the Grenoble meetup. There's a slim chance I can do both. I'd
 like to figure this out ASAP so I can book travel at a reasonable price.

 A second meetup certainly can't be bad; I'm sure we can get a ton of
 work done with the folks that I assume would attend. :)

 // jim

  OK. That's enough of my musing for now...

  Once again, if you will be attending the midcycle sprint in Grenoble
  the week of Feb 3rd, please sign up HERE:
  https://www.eventbrite.com/e/openstack-ironic-kilo-midcycle-sprint-in-grenoble-tickets-15082886319

  Regards,
  Devananda



Re: [openstack-dev] [TripleO] [Ironic] [Cinder] Baremetal volumes -- how to model direct attached storage

2014-11-14 Thread Josh Gachnang
For decom (now zapping), I'm building it with config flags to either
disable it entirely, or just disable the erase_disks steps. No comment on
the daft bit :) But I do understand why you'd want to do it this way.

https://review.openstack.org/#/c/102685/
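
For a rough sense of what those flags could look like, here's a minimal
oslo.config sketch -- the option names are illustrative, not necessarily
the ones in the review above:

    from oslo.config import cfg

    # Hypothetical option names -- see the review above for the real ones.
    zap_opts = [
        cfg.BoolOpt('enable_zapping', default=True,
                    help='Enable the zapping (decom) process at all.'),
        cfg.BoolOpt('zap_erase_disks', default=True,
                    help='Run the erase_disks steps during zapping.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(zap_opts, group='conductor')

    def enabled_zap_steps(all_steps):
        # Filter the zap steps according to the two flags above.
        if not CONF.conductor.enable_zapping:
            return []
        if not CONF.conductor.zap_erase_disks:
            return [s for s in all_steps if s != 'erase_disks']
        return all_steps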

On Fri Nov 14 2014 at 6:14:13 AM Clint Byrum <cl...@fewbar.com> wrote:

 Excerpts from Chris Jones's message of 2014-11-14 00:42:48 -0800:
  Hi

  My thoughts:

  Shoe-horning the ephemeral partition into Cinder seems like a lot of
  pain for almost no gain[1]. The only gain I can think of would be that
  we could bring a node down, boot it into a special ramdisk that exposes
  the volume to the network, so cindery operations (e.g. migration) could
  be performed, but I'm not even sure if anyone is asking for that?

  Forcing Cinder to understand and track something it can never normally
  do anything with seems like we're just trying to squeeze ourselves into
  an ever-shrinking VM costume!

  Having said that, "preserve ephemeral" is a terrible oxymoron, so if we
  can do something about it, we probably should.

  How about instead, we teach Nova/Ironic about a concept of "no
  ephemeral"? They make a partition on the first disk for the first image
  they deploy, and then they never touch the other part(s) of the
  disk(s), until the instance is destroyed. This creates one additional
  burden for operators, which is to create and format a partition the
  first time they boot, but since this is a very small number of
  commands, and something we could trivially bake into our (root?)
  elements, I'm not sure it's a huge problem.

  This gets rid of the cognitive dissonance of preserving something that
  is described as ephemeral, and (IMO) makes it extremely clear that
  OpenStack isn't going to touch anything but the first partition of the
  first disk. If this were baked into the flavour rather than something
  we tack onto a nova rebuild command, it offers greater safety for
  operators against the risk of accidentally wiping a vital state
  partition with a misconstructed rebuild command.

 +1

 A predictable and simple rule seems like it would go a long way to
 decoupling state preservation from rebuild, which I like very much.

 There is, of course, the issue of decom then, but that has never been a
 concern for TripleO, and for OnMetal, they think we're a bit daft trying
 to preserve state while delivering new images anyway. :)
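
To make Chris's "no ephemeral" idea concrete, the first-boot step could be
as small as this (a sketch only; the device path and filesystem are
assumptions, with partition 1 taken to be the image partition):

    import subprocess

    STATE_DEV = '/dev/sda2'  # everything past partition 1 is never touched

    def ensure_state_partition():
        # blkid exits non-zero when it finds no filesystem signature, so a
        # fresh partition gets formatted exactly once; on every rebuild the
        # existing filesystem (and the state it holds) is left alone.
        if subprocess.call(['blkid', STATE_DEV]) != 0:
            subprocess.check_call(['mkfs.ext4', STATE_DEV])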



[openstack-dev] [Ironic] [Horizon] Ironic Horizon API

2014-09-02 Thread Josh Gachnang
Hey all,

I published a patch to add an Ironic API wrapper in Horizon. Having code up
for Horizon is a graduation requirement for Ironic, so I'd like some
eyeballs on it to at least tell us we're going in the right direction. I
understand this code won't land until after Ironic is integrated.

Another developer is working on the Horizon panels and other parts, and
will have them posted ASAP.

Review: https://review.openstack.org/#/c/117376/
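
For anyone reviewing who hasn't seen Horizon's API layer before, the
wrapper follows the same shape as the other api.* modules -- a thin layer
over python-ironicclient. A simplified sketch (not the code in the review):

    from ironicclient import client as ironic_client
    from openstack_dashboard.api import base

    def ironicclient(request):
        # Build an Ironic client from the token on the Horizon request.
        return ironic_client.get_client(
            1,  # Ironic API version
            os_auth_token=request.user.token.id,
            ironic_url=base.url_for(request, 'baremetal'))

    def node_list(request):
        # Mirror the list-style helpers in the other api.* modules.
        return ironicclient(request).node.list()
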
---
Josh Gachnang
Tech Blog: ServerCobra.com, @ServerCobra
Github.com/PCsForEducation


Re: [openstack-dev] [Ironic] [Horizon] Ironic Horizon API

2014-09-02 Thread Josh Gachnang
Right, the ASAP part is just to have a working Horizon for Ironic
(proposed, not merged) with most of the features we want as we move towards
the vote for graduation. We definitely understand the end-of-cycle crunch,
as we're dealing with the same in Ironic. I'm just looking for a general
"This code looks reasonable" or "Woah, don't do it like that", not trying
to get it merged this late in the cycle.

As for graduation, as I understand, we need to have code proposed to
Horizon that we can work to merge after we graduate.

Thanks!

---
Josh Gachnang
Tech Blog: ServerCobra.com, @ServerCobra
Github.com/PCsForEducation


On Tue, Sep 2, 2014 at 9:37 PM, Jim Rollenhagen <j...@jimrollenhagen.com>
wrote:



 On September 2, 2014 9:28:15 PM PDT, Akihiro Motoki <amot...@gmail.com>
 wrote:
  Hi,

  Good to know we will have Ironic support. I can help with the
  integration.

  Let me clarify the situation from the Horizon core team's perspective.
  I wonder why it is ASAP. Horizon is released with the integrated
  projects, and that is true for the Juno release too. Ironic is still
  incubated, even if it graduates in Kilo. What is the requirement for
  graduation? More detailed clarification is needed. All teams of the
  integrated projects are focusing on the Juno release, and all features
  will be reviewed after RC1 is shipped. The timing is a bit bad.

 Right, the Ironic team does not expect this to land in the Juno cycle.
 The graduation requirement is that Ironic has made a good-faith effort
 to work toward a Horizon panel.

 We would like some eyes on the code to make sure we're moving in the
 right direction, but again, we don't expect this to land until Kilo.

 // jim



Re: [openstack-dev] [Ironic][Agent]

2014-04-08 Thread Josh Gachnang

 I'm more accustomed to using PDUs for this type of thing. I.e., a
 power strip you can ssh into or hit via a web API to toggle power to
 individual ports.
 Machines are configured to power up on power restore, plus PXE boot.
 You have less control than with IPMI -- all you can do is toggle power
 to the outlet -- but it works well, even for some desktop machines I
 have in a lab.
 I don't have a compelling need, but I've often wondered if such a
 driver would be useful. I can imagine it also being useful if people
 want to power up non-compute stuff, though that's probably not a top
 priority right now.


I believe someone was talking about this yesterday in the meeting. It would
certainly be possible to write a power driver, in the mold of the IPMI one
(which may be renamed for exactly this reason), that controls the power of
a node via a PDU. You could then plug that into the agent driver as the
power driver to create something like AgentAndPDUDriver. The current agent
driver doesn't do anything with IPMI except set the boot device. The
inability to set the boot device would be the biggest issue with a PDU
driver as far as I can see, but that's not insurmountable.
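
Roughly, such a driver would just implement Ironic's power interface on
top of a PDU client (a sketch; the bodies are placeholders, and the method
names follow the driver interface as I understand it):

    from ironic.drivers import base

    class PDUPower(base.PowerInterface):
        """Toggle a node's power through a switched PDU outlet."""

        def get_properties(self):
            return {'pdu_address': 'Hostname of the PDU. Required.',
                    'pdu_outlet': 'Outlet number for this node. Required.'}

        def validate(self, task):
            # Check driver_info contains a PDU address and outlet number.
            pass

        def get_power_state(self, task):
            # Ask the PDU (over ssh or its web API) if the outlet is live.
            pass

        def set_power_state(self, task, power_state):
            # Switch the outlet on or off.
            pass

        def reboot(self, task):
            # Power-cycle the outlet; the node PXE boots on power restore.
            pass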

How much hardware information do we intend to store in Ironic? (Note
 that I'm genuinely asking this, not challenging your assertion.) It
 seems reasonable, but I think there's a lot of hardware information
 that could be useful (say, lspci output, per-processor flags, etc.),
 but stuffing it all in extra[] seems kind of messy.


Right now the hardware manager on the agent is pluggable, so what we're
storing is currently "whatever you want!" I think in our current
iteration, it is just the MACs of the NICs. We haven't fully fleshed this
out yet.
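
As a taste of how pluggable it is, a deployer's hardware manager is just a
class the agent can load. Something along these lines (a sketch against the
agent's interface; treat the exact names as approximate):

    from ironic_python_agent import hardware

    class MACOnlyHardwareManager(hardware.HardwareManager):
        """Minimal example manager that only reports NIC MACs."""

        def evaluate_hardware_support(self):
            # Higher values win when the agent picks a manager.
            return hardware.HardwareSupport.SERVICE_PROVIDER

        def list_network_interfaces(self):
            # The MACs returned here are what the agent reports back to
            # Ironic for the node's ports.
            return [hardware.NetworkInterface('eth0', 'aa:bb:cc:dd:ee:ff')]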

---
Josh Gachnang
Tech Blog: ServerCobra.com, @ServerCobra
Github.com/PCsForEducation


On Tue, Apr 8, 2014 at 10:46 AM, Matt Wagner <matt.wag...@redhat.com> wrote:

 On 08/04/14 14:04 +0400, Vladimir Kozhukalov wrote:
 [snip]

  0) There is plenty of old hardware which does not have IPMI/iLO at all.
 How is Ironic supposed to power it off and on? SSH? But Ironic is not
 supposed to interact with the host OS.


 I'm more accustomed to using PDUs for this type of thing. I.e., a
 power strip you can ssh into or hit via a web API to toggle power to
 individual ports.

 Machines are configured to power up on power restore, plus PXE boot.
 You have less control than with IPMI -- all you can do is toggle power
 to the outlet -- but it works well, even for some desktop machines I
 have in a lab.

 I don't have a compelling need, but I've often wondered if such a
 driver would be useful. I can imagine it also being useful if people
 want to power up non-compute stuff, though that's probably not a top
 priority right now.


  1) We agreed that Ironic is the place where we can store hardware info
 (the 'extra' field in the node model). But many modern hardware
 configurations support hot-pluggable hard drives, CPUs, and even memory.
 How will Ironic know that the hardware configuration has changed? Does it
 need to know about hardware changes at all? Is it assumed that some
 monitoring agent (NOT the Ironic agent) will be used for that? But if we
 already have a discovery extension in the Ironic agent, then it sounds
 rational to use this extension for monitoring as well. Right?


 How much hardware information do we intend to store in Ironic? (Note
 that I'm genuinely asking this, not challenging your assertion.) It
 seems reasonable, but I think there's a lot of hardware information
 that could be useful (say, lspci output, per-processor flags, etc.),
 but stuffing it all in extra[] seems kind of messy.

 I don't have an overall answer for this question; I'm curious myself.

 -- Matt

