Re: [openstack-dev] [Ironic] RAID interface - backing disk hints

2015-01-22 Thread Victor Lowther
On Thu, Jan 22, 2015 at 1:44 AM, Tim Bell tim.b...@cern.ch wrote:
 -Original Message-
 From: Victor Lowther [mailto:victor.lowt...@gmail.com]
 Sent: 21 January 2015 21:06
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ironic] RAID interface - backing disk hints

 On Tue, Jan 20, 2015 at 8:36 AM, Jim Rollenhagen j...@jimrollenhagen.com
 wrote:
  On Tue, Jan 20, 2015 at 07:28:46PM +0530, Ramakrishnan G wrote:
 ...

 Given that, deciding to build and manage arrays based on drive
 mfgr/model/firmware is a lot less useful than deciding to build and manage
 them based on interface type/media type/size/spindle speed/slot#.


 +1 - How about using the /dev/disk/by-path information, which lets you install 
 the system onto disks by their device location.

 Have a look at how kickstart does it.  It's the same problem so we don't need 
 to re-invent the wheel.
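
For illustration, a minimal sketch (not from the thread; the device
names are hypothetical) of reading that by-path layout from Python:

    import os

    BY_PATH = "/dev/disk/by-path"

    def disks_by_location():
        """Map each physical bus location to the kernel device behind it."""
        layout = {}
        for link in sorted(os.listdir(BY_PATH)):
            if "-part" in link:  # skip partition symlinks, keep whole disks
                continue
            # e.g. 'pci-0000:00:1f.2-ata-1' -> '/dev/sda'
            layout[link] = os.path.realpath(os.path.join(BY_PATH, link))
        return layout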

I am aware of how kickstart detects disks and how we can pass
information about which disk to install on, and that is only
tangentially related to what I am talking about.

What I have been talking about is how to decide which physical disks
attached to a hardware RAID controller should be used to create a RAID
volume, and the relative usefulness of the properties of the physical
disks in doing so.  My contention is that drive mfgr/model/firmware is
not used in practice to make that decision -- other factors, such as
disk size, spindle speed, media type, and interface type, are what get
used in a production setting.
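
A hypothetical sketch of that kind of hint matching (the field names
are made up for illustration, not Ironic's actual RAID interface):

    def match_disks(physical_disks, hints):
        """Return the disks whose properties satisfy every hint given."""
        return [d for d in physical_disks
                if all(d.get(k) == v for k, v in hints.items())]

    disks = [
        {"slot": 0, "interface": "sas", "media": "ssd", "size_gb": 400, "rpm": 0},
        {"slot": 1, "interface": "sas", "media": "hdd", "size_gb": 600, "rpm": 15000},
        {"slot": 2, "interface": "sas", "media": "hdd", "size_gb": 600, "rpm": 15000},
    ]
    # Pick the 600GB 15k SAS spindles for a RAID1, regardless of who
    # manufactured them or what firmware they happen to carry.
    raid1_members = match_disks(disks, {"media": "hdd", "size_gb": 600,
                                        "rpm": 15000, "interface": "sas"})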


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] RAID interface - backing disk hints

2015-01-21 Thread Victor Lowther
On Tue, Jan 20, 2015 at 8:36 AM, Jim Rollenhagen j...@jimrollenhagen.com 
wrote:
 On Tue, Jan 20, 2015 at 07:28:46PM +0530, Ramakrishnan G wrote:
 Hi All,

 I would like to hear everyone's thoughts and hopefully reach a conclusion
 on whether we should be open to including more criteria or not.

 I think these filters make sense. An operator may want to say "RAID all
 disks of this model"; that's completely reasonable.

It is?  Do you know any operators who have actually done that in
production?  I have never heard of it. In my experience, operators
only care about hard drive mfgr/model/firmware rev for actuarial
reasons, not when building and rebuilding arrays.  When it comes to
creating arrays and rebuilding them, what matters more is media type,
interface type and speed, size, slot#, and spindle speed.  More to the
point, the exact make/model/firmware rev of disks in the system will
change over its lifetime as drives fail and get replaced -- the
default drive replacement policy at Dell (and, I would expect, most
server vendors) is that you get a compatible replacement with the same
or better specs, and getting a drive of the same model from the same
manufacturer with the same firmware rev is not guaranteed.  If you
ask and if any are in stock, it might happen, but when I did support,
most people just did not care as long as the replacement arrived
within 4 hours.

Given that, deciding to build and manage arrays based on drive
mfgr/model/firmware is a lot less useful than deciding to build and
manage them based on interface type/media type/size/spindle
speed/slot#.

 We've already decided we want to implement the same filters for deciding
 which disk to put the root on[0], and so we'll need to write this code
 for most/all drivers anyway. We can simply re-use this code for the RAID
 use case.

Not really -- there is no expectation that the operating system can
see the mfgr/model/firmware of the physical disks that make up a
virtual disk.  What you see instead from the OS side is made up by the
RAID controller (and if you are lucky it will be the same value as
what you see from whatever you are using to manage the RAID array, but
there is no expectation that it works that way), and assuming it
reflects the physical disks making up the array is just wrong.  To
make things even more interesting, you cannot even assume that the
interface you will use to create the virtual disk will return a unique
identifier for that virtual disk that corresponds to anything you will
see on the OS side -- that is an issue that we are having to work
around for the RAID interfaces that the DRAC exposes.  Sad to say, the
only thing you can really count on for picking the right RAID volume for
a root device is knowing what size it should be, or else always creating
the virtual disk for the root array first, choosing /dev/sda, and hoping
your RAID controller exposes devices in the order in which they were
created.
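
In code, that size-based fallback amounts to something like this sketch
(the device list stands in for whatever the deploy ramdisk reports;
names and sizes are illustrative):

    def pick_root_device(block_devices, expected_size_gb, tolerance_gb=1):
        """Return the first device whose size is within tolerance."""
        for dev in block_devices:
            if abs(dev["size_gb"] - expected_size_gb) <= tolerance_gb:
                return dev["name"]
        return None

    devices = [{"name": "/dev/sda", "size_gb": 558},
               {"name": "/dev/sdb", "size_gb": 1117}]
    root = pick_root_device(devices, expected_size_gb=558)  # -> '/dev/sda'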

If you are not using RAID then using mfgr/model/firmware/serial#
composed together to make a unique identifier makes sense.  If you are
using RAID it does not because there is no expectation that
information is exposed to the OS at all.
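
For that non-RAID case, a hedged sketch of composing such an identifier
(the key names are illustrative, not Ironic's schema):

    def disk_identity(disk):
        """Compose a stable ID from fields the OS can actually see."""
        return "_".join(str(disk.get(k, "unknown"))
                        for k in ("mfgr", "model", "firmware", "serial"))

    disk_identity({"mfgr": "SEAGATE", "model": "ST600MP0005",
                   "firmware": "N003", "serial": "S7M0ABC123"})
    # -> 'SEAGATE_ST600MP0005_N003_S7M0ABC123'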

 // jim

 [0] 
 http://specs.openstack.org/openstack/ironic-specs/specs/kilo/root-device-hints.html


 Please pour in your thoughts on the thread

 Regards,
 Ramakrishnan (irc: rameshg87)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term discovery

2014-11-12 Thread Victor Lowther
Hmmm... with this thread in mind, anyone think that changing DISCOVERING to
INTROSPECTING in the new state machine spec is a good idea?

On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya sandhya.ganapa...@hp.com
 wrote:

 Hi all,

 Following the mail thread on disambiguating the term 'discovery' -

 Along the lines of what Devananda stated, Hardware Introspection also
 means retrieving and storing hardware details of the node whose credentials
 and IP address are known to the system. (Correct me if I am wrong.)

 I am currently in the process of extracting hardware details (cpu, memory,
 etc.) of a number of nodes belonging to a Chassis whose credentials are
 already known to Ironic. Does this process fall in the category of hardware
 introspection?
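
A sketch of the kind of per-node detail involved (the field names mirror
properties Ironic commonly records for a node; the values here are
illustrative only):

    # One node's introspected hardware properties, as a plain dict.
    node_properties = {
        "cpus": 24,            # logical CPU count
        "cpu_arch": "x86_64",
        "memory_mb": 131072,   # 128 GiB
        "local_gb": 558,       # usable size of the local root disk
    }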

 Thanks,
 Sandhya.

 -Original Message-
 From: Devananda van der Veen [mailto:devananda@gmail.com]
 Sent: Tuesday, October 21, 2014 5:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Ironic] disambiguating the term discovery

 Hi all,

 I was reminded in the Ironic meeting today that the words "hardware
 discovery" are overloaded and used in different ways by different people.
 Since this is something we are going to talk about at the summit (again),
 I'd like to start the discussion by building consensus on the language
 we're going to use.

 So, I'm starting this thread to explain how I use those two words, and
 some other words that I use to mean something else -- which is what some
 people mean when they use those words. I'm not saying my words are the
 right words -- they're just the words that make sense to my brain right
 now. If someone else has better words, and those words also make sense (or
 make more sense) then I'm happy to use those instead.

 So, here are rough definitions for the terms I've been using for the last
 six months to disambiguate this:

 hardware discovery
 The process or act of identifying hitherto unknown hardware, which is
 addressable by the management system, in order to later make it available
 for provisioning and management.

 hardware introspection
 The process or act of gathering information about the properties or
 capabilities of hardware already known by the management system.


 Why is this disambiguation important? At the last midcycle, we agreed that
 "hardware discovery" is out of scope for Ironic -- finding new, unmanaged
 nodes and enrolling them with Ironic is best left to other services or
 processes, at least for the foreseeable future.

 However, introspection is definitely within scope for Ironic. Even
 though we couldn't agree on the details during Juno, we are going to
 revisit this at the Kilo summit. This is an important feature for many of
 our current users, and multiple proof of concept implementations of this
 have been done by different parties over the last year.

 It may be entirely possible that no one else in our developer community is
 using the term "introspection" in the way that I've defined it above -- if
 so, that's fine, I can stop calling that "introspection", but I don't know
 a better word for the thing that is "find-unknown-hardware".

 Suggestions welcome,
 Devananda


 P.S.

 For what it's worth, googling for "hardware discovery" yields several
 results related to identifying unknown network-connected devices and adding
 them to inventory systems, which is the way that I'm using the term right
 now, so I don't feel completely off in continuing to say "discovery" when I
 mean "find unknown network devices and add them to Ironic".

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Building deploy ramdisks with dracut

2014-07-07 Thread Victor Lowther
As one of the original authors of dracut, I would love to see it being used
to build initramfs images for TripleO. dracut is flexible, works across a
wide variety of distros, and removes the need to have special-purpose
toolchains and packages for use by the initramfs.


On Thu, Jul 3, 2014 at 10:12 PM, Ben Nemec openst...@nemebean.com wrote:

 I've recently been looking into using dracut to build the
 deploy-ramdisks that we use for TripleO.  There are a few reasons for
 this: 1) dracut is a fairly standard way to generate a ramdisk, so users
 are more likely to know how to debug problems with it.  2) If we build
 with dracut, we get a lot of the udev/net/etc stuff that we're currently
 doing manually for free.  3) (aka the self-serving one ;-) RHEL 7
 doesn't include busybox, so we can't currently build ramdisks on that
 distribution using the existing ramdisk element.

 For the RHEL issue, this could just be an alternate way to build
 ramdisks, but given some of the other benefits I mentioned above I
 wonder if it would make sense to look at completely replacing the
 existing element.  From my investigation thus far, I think dracut can
 accommodate all of the functionality in the existing ramdisk element,
 and it looks to be available on all of our supported distros.
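
A rough sketch of driving dracut that way from Python (--force,
--no-hostonly, and --add are standard dracut options; the module list,
output path, and kernel version here are illustrative):

    import subprocess

    def build_ramdisk(kver, out="/tmp/deploy-ramdisk.img"):
        """Build a generic (non-host-specific) initramfs with dracut."""
        subprocess.check_call([
            "dracut", "--force",
            "--no-hostonly",     # don't tailor the image to the build host
            "--add", "network",  # include network bring-up support
            out, kver,           # positional args: output image, kernel ver
        ])

    build_ramdisk("3.10.0-123.el7.x86_64")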

 So that's my pitch in favor of using dracut for ramdisks.  Any thoughts?
  Thanks.

 https://dracut.wiki.kernel.org/index.php/Main_Page

 -Ben

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev