Re: [openstack-dev] [Ironic] Proposing new meeting times

2014-11-18 Thread Dmitry Tantsur

On 11/18/2014 02:00 AM, Devananda van der Veen wrote:

Hi all,

As discussed in Paris and at today's IRC meeting [1], we are going to be
alternating the time of the weekly IRC meetings to better accommodate our
contributors in APAC. No time will be perfect for everyone, but as it
stands, we rarely (if ever) see our Indian, Chinese, and Japanese
contributors -- and it's quite hard for any of the AU / NZ folks to attend.

I'm proposing two sets of times below. Please respond with a "-1" vote
to an option if that option would cause you to miss ALL meetings, or a
"+1" vote if you can magically attend ALL the meetings. If you can
attend, without significant disruption, at least one of the time slots
in a proposal, please do not vote either for or against it. This way we
can identify a proposal which allows everyone to attend at a minimum 50%
of the meetings, and preferentially weight towards one that allows more
contributors to attend two meetings.

This link shows the local times in some major countries / timezones
around the world (and you can customize it to add your own).
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20141125&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

For reference, the current meeting time is 1900 UTC.

Option #1: alternate between Monday 1900 UTC && Tuesday 0900 UTC.  I
like this because 1900 UTC spans all of the US and western EU, while 0900
combines EU and APAC. Folks in western EU are "in the middle" and can
attend all meetings.

+1



http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=19&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=25&hour=9&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


Option #2: alternate between Monday 1700 UTC && Tuesday 0500 UTC. I like
this because it shifts the current slot two hours earlier, making it
easier for eastern EU to attend without excluding the western US, and
while 0500 UTC is not so late that US west coast contributors can't
attend (it's 9PM for us), it is harder for western EU folks to attend.
There's really no one in the middle here, but there is at least a chance
for the US west coast and APAC to overlap, which we don't have at any
other time.

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=17&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


I'll collate all the responses to this thread during the week, ahead of
next week's regularly-scheduled meeting.

-Devananda

[1]
http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-11-17-19.00.log.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 01:15 PM, Lucas Alvares Gomes wrote:

This was discussed at the Contributor Meetup on Friday at the Summit,
but I think it's important to share it on the mailing list too so we can
get more opinions/suggestions/comments about it.

In the Ironic weekly meeting we dedicate a good part of the meeting to
announcements: bug status, CI status, oslo status, specific driver
status, etc. It's all good information, but I believe the mailing list
would be a better place to report it, and then we can free up some time
in our meeting to actually discuss things.

Are you guys in favor of it?

If so, I'd like to propose a new format based on the discussions we had
in Paris. The people doing status reports at the meeting would start
adding their status to an etherpad, and a designated person would then
collect this information and send it to the mailing list once a week.

For the meeting itself we have a wiki page with an agenda [1] which
everyone can edit to add the topics they want to discuss at the meeting;
I think that's fine and works. The only change would be that we may want
to freeze the agenda 2 days before the meeting, so people can take a
look at the topics that will be discussed and prepare for them; with
that we can move through the discussions more quickly because people
will already be familiar with the topics.

Let me know what you guys think.
I'm not really fond of it (as with every process complication), but it
looks inevitable, so +1.




[1] https://wiki.openstack.org/wiki/Meetings/Ironic

Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 01:54 PM, Doug Hellmann wrote:


On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur  wrote:


On 11/12/2014 08:06 PM, Doug Hellmann wrote:

During our “Graduation Schedule” summit session we worked through the list of 
modules remaining in the incubator. Our notes are in the etherpad [1], but as 
part of the "Write it Down” theme for Oslo this cycle I am also posting a 
summary of the outcome here on the mailing list for wider distribution. Let me know 
if you remembered the outcome for any of these modules differently than what I have 
written below.

Doug



Deleted or deprecated modules:

funcutils.py - This was present only for python 2.6 support, but it is no 
longer used in the applications. We are keeping it in the stable/juno branch of 
the incubator, and removing it from master (https://review.openstack.org/130092)

hooks.py - This is not being used anywhere, so we are removing it. 
(https://review.openstack.org/#/c/125781/)

quota.py - A new quota management system is being created 
(https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
replace this, so we will keep it in the incubator for now but deprecate it.

crypto/utils.py - We agreed to mark this as deprecated and encourage the use of 
Barbican or cryptography.py (https://review.openstack.org/134020)

cache/ - Morgan is going to be working on a new oslo.cache library as a 
front-end for dogpile, so this is also deprecated 
(https://review.openstack.org/134021)

apiclient/ - With the SDK project picking up steam, we felt it was safe to 
deprecate this code as well (https://review.openstack.org/134024).

xmlutils.py - This module was used to provide a security fix for some XML 
modules that have since been updated directly. It was removed. 
(https://review.openstack.org/#/c/125021/)



Graduating:

oslo.context:
- Dims is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
- includes:
context.py

oslo.service:
- Sachi is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
- includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py

By the way, right now I'm looking into updating this code to be able to run 
tasks on a thread pool, not only in a single thread (quite a problem for Ironic). 
Does that somehow interfere with the graduation? Are there any deadlines or anything?


Feature development on code declared ready for graduation is basically frozen 
until the new library is created. You should plan on doing that work in the new 
oslo.service repository, which should be showing up soon. And the feature you 
describe sounds like something for which we would want a spec written, so please 
consider filing one when you have some of the details worked out.
Sure, right now I'm experimenting in the Ironic tree to figure out how it 
really works. There's a single oslo-specs repo for the whole of Oslo, right?
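
To make the idea more concrete, here is a minimal sketch of the direction I'm
experimenting with, using just a futures-style thread pool (the 'futures'
backport on Python 2); the names here are illustrative, not the actual
oslo-incubator code:

    from concurrent import futures  # the 'futures' backport on Python 2
    import time

    def run_periodic_tasks(tasks, interval, max_workers=4):
        # Dispatch each periodic task to a worker thread instead of
        # running them one after another in a single thread.
        with futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
            while True:
                done = [pool.submit(task) for task in tasks]
                futures.wait(done)  # let the whole round finish first
                time.sleep(interval)

The real change would of course have to live in periodic_task.py /
threadgroup.py (or their new home in oslo.service), hence my question about
timing.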







request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py

oslo.utils:
- We need to look into how to preserve the git history as we import these 
modules.
- includes:
fileutils.py
versionutils.py



Remaining untouched:

scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
whether Gantt has enough traction yet so we will hold onto these in the 
incubator for at least another cycle.

report/ - There’s interest in creating an oslo.reports library containing this 
code, but we haven’t had time to coordinate with Solly about doing that.



Other work:

We will continue the work on oslo.concurrency and oslo.log that we started 
during Juno.

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 12:27 PM, Ganapathy, Sandhya wrote:

Hi All,

Based on the discussions, I have filed a blueprint that initiates discovery of 
node hardware details given its credentials at the chassis level. I am in the 
process of creating a spec for it. Do share your thoughts regarding this -

https://blueprints.launchpad.net/ironic/+spec/chassis-level-node-discovery
Hi and thank you for the suggestion. As already said, this thread is not 
the best place to discuss it, so please file a (short version of) spec, 
so that we can comment on it.


Thanks,
Sandhya.

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, November 13, 2014 2:20 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

On 11/12/2014 10:47 PM, Victor Lowther wrote:

Hmmm... with this thread in mind, anyone think that changing
DISCOVERING to INTROSPECTING in the new state machine spec is a good idea?

As before I'm uncertain. Discovery is a troublesome term, but too many people 
use and recognize it, while IMO introspecting is much less common. So count me 
as -0 on this.



On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya
mailto:sandhya.ganapa...@hp.com>> wrote:

 Hi all,

 Following the mail thread on disambiguating the term 'discovery' -

 In the lines of what Devananda had stated, Hardware Introspection
 also means retrieving and storing hardware details of the node whose
 credentials and IP Address are known to the system. (Correct me if I
 am wrong).

 I am currently in the process of extracting hardware details (cpu,
 memory etc..) of n no. of nodes belonging to a Chassis whose
 credentials are already known to ironic. Does this process fall in
 the category of hardware introspection?

 Thanks,
 Sandhya.

 -Original Message-
 From: Devananda van der Veen [mailto:devananda@gmail.com
 <mailto:devananda@gmail.com>]
 Sent: Tuesday, October 21, 2014 5:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Ironic] disambiguating the term "discovery"

 Hi all,

 I was reminded in the Ironic meeting today that the words "hardware
 discovery" are overloaded and used in different ways by different
 people. Since this is something we are going to talk about at the
 summit (again), I'd like to start the discussion by building
 consensus in the language that we're going to use.

 So, I'm starting this thread to explain how I use those two words,
 and some other words that I use to mean something else which is what
 some people mean when they use those words. I'm not saying my words
 are the right words -- they're just the words that make sense to my
 brain right now. If someone else has better words, and those words
 also make sense (or make more sense) then I'm happy to use those
 instead.

 So, here are rough definitions for the terms I've been using for the
 last six months to disambiguate this:

 "hardware discovery"
 The process or act of identifying hitherto unknown hardware, which
 is addressable by the management system, in order to later make it
 available for provisioning and management.

 "hardware introspection"
 The process or act of gathering information about the properties or
 capabilities of hardware already known by the management system.


 Why is this disambiguation important? At the last midcycle, we
 agreed that "hardware discovery" is out of scope for Ironic --
 finding new, unmanaged nodes and enrolling them with Ironic is best
  left to other services or processes, at least for the foreseeable future.

 However, "introspection" is definitely within scope for Ironic. Even
 though we couldn't agree on the details during Juno, we are going to
 revisit this at the Kilo summit. This is an important feature for
 many of our current users, and multiple proof of concept
 implementations of this have been done by different parties over the
 last year.

 It may be entirely possible that no one else in our developer
 community is using the term "introspection" in the way that I've
 defined it above -- if so, that's fine, I can stop calling that
 "introspection", but I don't know a better word for the thing that
 is find-unknown-hardware.

 Suggestions welcome,
 Devananda


 P.S.

 For what it's worth, googling for "hardware discovery" yields
 several results related to identifying unknown network-connected
 devices and adding them to inventory systems, which is the way that
 I'm using the term right now, so I don't feel completely off in
 continuing to say "discovery" when I mean "find unknown network devices and add them to Ironic".

Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Dmitry Tantsur

On 11/12/2014 08:06 PM, Doug Hellmann wrote:

During our “Graduation Schedule” summit session we worked through the list of 
modules remaining in the incubator. Our notes are in the etherpad [1], but as 
part of the "Write it Down” theme for Oslo this cycle I am also posting a 
summary of the outcome here on the mailing list for wider distribution. Let me know 
if you remembered the outcome for any of these modules differently than what I have 
written below.

Doug



Deleted or deprecated modules:

funcutils.py - This was present only for python 2.6 support, but it is no 
longer used in the applications. We are keeping it in the stable/juno branch of 
the incubator, and removing it from master (https://review.openstack.org/130092)

hooks.py - This is not being used anywhere, so we are removing it. 
(https://review.openstack.org/#/c/125781/)

quota.py - A new quota management system is being created 
(https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
replace this, so we will keep it in the incubator for now but deprecate it.

crypto/utils.py - We agreed to mark this as deprecated and encourage the use of 
Barbican or cryptography.py (https://review.openstack.org/134020)

cache/ - Morgan is going to be working on a new oslo.cache library as a 
front-end for dogpile, so this is also deprecated 
(https://review.openstack.org/134021)

apiclient/ - With the SDK project picking up steam, we felt it was safe to 
deprecate this code as well (https://review.openstack.org/134024).

xmlutils.py - This module was used to provide a security fix for some XML 
modules that have since been updated directly. It was removed. 
(https://review.openstack.org/#/c/125021/)



Graduating:

oslo.context:
- Dims is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
- includes:
context.py

oslo.service:
- Sachi is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
- includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py
By the way, right now I'm looking into updating this code to be able to 
run tasks on a thread pool, not only in a single thread (quite a problem for 
Ironic). Does that somehow interfere with the graduation? Are there any 
deadlines or anything?



request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py

oslo.utils:
- We need to look into how to preserve the git history as we import these 
modules.
- includes:
fileutils.py
versionutils.py



Remaining untouched:

scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
whether Gantt has enough traction yet so we will hold onto these in the 
incubator for at least another cycle.

report/ - There’s interest in creating an oslo.reports library containing this 
code, but we haven’t had time to coordinate with Solly about doing that.



Other work:

We will continue the work on oslo.concurrency and oslo.log that we started 
during Juno.

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-11-13 Thread Dmitry Tantsur

On 11/12/2014 10:47 PM, Victor Lowther wrote:

Hmmm... with this thread in mind, anyone think that changing DISCOVERING
to INTROSPECTING in the new state machine spec is a good idea?
As before I'm uncertain. Discovery is a troublesome term, but too many 
people use and recognize it, while IMO introspecting is much less 
common. So count me as -0 on this.




On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya
mailto:sandhya.ganapa...@hp.com>> wrote:

Hi all,

Following the mail thread on disambiguating the term 'discovery' -

In the lines of what Devananda had stated, Hardware Introspection
also means retrieving and storing hardware details of the node whose
credentials and IP Address are known to the system. (Correct me if I
am wrong).

I am currently in the process of extracting hardware details (cpu,
memory etc..) of n no. of nodes belonging to a Chassis whose
credentials are already known to ironic. Does this process fall in
the category of hardware introspection?

Thanks,
Sandhya.

-Original Message-
From: Devananda van der Veen [mailto:devananda@gmail.com
]
Sent: Tuesday, October 21, 2014 5:41 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ironic] disambiguating the term "discovery"

Hi all,

I was reminded in the Ironic meeting today that the words "hardware
discovery" are overloaded and used in different ways by different
people. Since this is something we are going to talk about at the
summit (again), I'd like to start the discussion by building
consensus in the language that we're going to use.

So, I'm starting this thread to explain how I use those two words,
and some other words that I use to mean something else which is what
some people mean when they use those words. I'm not saying my words
are the right words -- they're just the words that make sense to my
brain right now. If someone else has better words, and those words
also make sense (or make more sense) then I'm happy to use those
instead.

So, here are rough definitions for the terms I've been using for the
last six months to disambiguate this:

"hardware discovery"
The process or act of identifying hitherto unknown hardware, which
is addressable by the management system, in order to later make it
available for provisioning and management.

"hardware introspection"
The process or act of gathering information about the properties or
capabilities of hardware already known by the management system.


Why is this disambiguation important? At the last midcycle, we
agreed that "hardware discovery" is out of scope for Ironic --
finding new, unmanaged nodes and enrolling them with Ironic is best
left to other services or processes, at least for the foreseeable future.

However, "introspection" is definitely within scope for Ironic. Even
though we couldn't agree on the details during Juno, we are going to
revisit this at the Kilo summit. This is an important feature for
many of our current users, and multiple proof of concept
implementations of this have been done by different parties over the
last year.

It may be entirely possible that no one else in our developer
community is using the term "introspection" in the way that I've
defined it above -- if so, that's fine, I can stop calling that
"introspection", but I don't know a better word for the thing that
is find-unknown-hardware.

Suggestions welcome,
Devananda


P.S.

For what it's worth, googling for "hardware discovery" yields
several results related to identifying unknown network-connected
devices and adding them to inventory systems, which is the way that
I'm using the term right now, so I don't feel completely off in
continuing to say "discovery" when I mean "find unknown network
devices and add them to Ironic".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-10-21 Thread Dmitry Tantsur

On 10/21/2014 02:11 AM, Devananda van der Veen wrote:

Hi all,

I was reminded in the Ironic meeting today that the words "hardware
discovery" are overloaded and used in different ways by different
people. Since this is something we are going to talk about at the
summit (again), I'd like to start the discussion by building consensus
in the language that we're going to use.

So, I'm starting this thread to explain how I use those two words, and
some other words that I use to mean something else which is what some
people mean when they use those words. I'm not saying my words are the
right words -- they're just the words that make sense to my brain
right now. If someone else has better words, and those words also make
sense (or make more sense) then I'm happy to use those instead.

So, here are rough definitions for the terms I've been using for the
last six months to disambiguate this:

"hardware discovery"
The process or act of identifying hitherto unknown hardware, which is
addressable by the management system, in order to later make it
available for provisioning and management.

"hardware introspection"
The process or act of gathering information about the properties or
capabilities of hardware already known by the management system.
I generally agree with this separation, though it brings some trouble 
for me, as I'm used to calling "discovery" what you called 
"introspection" (it was not the case this summer, but I have since 
changed my mind). And the term "discovery" is baked into the... hmm... 
introspection service that I've written [1].


So I would personally prefer to leave "discovery" as in "discovery of 
hardware properties", though I realize that "introspection" may be a 
better name.


[1] https://github.com/Divius/ironic-discoverd



Why is this disambiguation important? At the last midcycle, we agreed
that "hardware discovery" is out of scope for Ironic -- finding new,
unmanaged nodes and enrolling them with Ironic is best left to other
services or processes, at least for the foreseeable future.

However, "introspection" is definitely within scope for Ironic. Even
though we couldn't agree on the details during Juno, we are going to
revisit this at the Kilo summit. This is an important feature for many
of our current users, and multiple proof of concept implementations of
this have been done by different parties over the last year.

It may be entirely possible that no one else in our developer
community is using the term "introspection" in the way that I've
defined it above -- if so, that's fine, I can stop calling that
"introspection", but I don't know a better word for the thing that is
find-unknown-hardware.

Suggestions welcome,
Devananda


P.S.

For what it's worth, googling for "hardware discovery" yields several
results related to identifying unknown network-connected devices and
adding them to inventory systems, which is the way that I'm using the
term right now, so I don't feel completely off in continuing to say
"discovery" when I mean "find unknown network devices and add them to
Ironic".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-17 Thread Dmitry Tantsur

Hi Jim,

On 10/16/2014 07:23 PM, Jim Mankovich wrote:

All,

I would like to get some feedback on a proposal to change the
current sensor naming implemented in ironic and ceilometer.

I would like to provide vendor-specific sensors within the current
structure for IPMI sensors in ironic and ceilometer, but I have found
that the current implementation of sensor meters in ironic and
ceilometer is IPMI-specific (from a meter naming perspective). As it
currently stands, this is not suitable for supporting sensor information
from a provider other than IPMI. Also, the current Resource ID naming
makes it difficult for a consumer of sensors to quickly find all the
sensors for a given Ironic Node ID, so I would like to propose changing
the Resource ID naming as well.

Currently, sensors sent by ironic to ceilometer get named by ceilometer
as "hardware.ipmi.SensorType", and the Resource ID is the Ironic
Node ID with a suffix containing the Sensor ID. For details
pertaining to the issue with the Resource ID naming, see
https://bugs.launchpad.net/ironic/+bug/1377157, "ipmi sensor naming in
ceilometer is not consumer friendly".

Here is an example of what meters look like for sensors in ceilometer
with the current implementation:
| Name                      | Type  | Unit | Resource ID                                                  |
| hardware.ipmi.current     | gauge | W    | edafe6f4-5996-4df8-bc84-7d92439e15c0-power_meter_(0x16)     |
| hardware.ipmi.temperature | gauge | C    | edafe6f4-5996-4df8-bc84-7d92439e15c0-16-system_board_(0x15) |

What I would like to propose is dropping the ipmi string from the name
altogether and appending the Sensor ID to the name instead of to the
Resource ID. So, transforming the above to the new naming would result
in the following:
| Name                                      | Type  | Unit | Resource ID                          |
| hardware.current.power_meter_(0x16)       | gauge | W    | edafe6f4-5996-4df8-bc84-7d92439e15c0 |
| hardware.temperature.system_board_(0x15)  | gauge | C    | edafe6f4-5996-4df8-bc84-7d92439e15c0 |

+1

A very, very minor nit, feel free to ignore if inappropriate: maybe 
hardware.temperature.system_board.0x15? I.e. separate with dots instead 
of using brackets?
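
Just to illustrate, a rough sketch of the proposed transformation (function
and variable names are made up, not the actual ironic/ceilometer code):

    def transform(node_uuid, sensor_type, sensor_id):
        # Old scheme: the provider is in the meter name, the sensor is
        # appended to the resource ID.
        old_name = 'hardware.ipmi.%s' % sensor_type
        old_resource = '%s-%s' % (node_uuid, sensor_id)
        # Proposed scheme: the sensor ID moves into the meter name
        # (dot-separated, no brackets) and the resource ID is just the
        # Ironic node UUID.
        new_name = 'hardware.%s.%s' % (sensor_type,
                                       sensor_id.replace('_(', '.').rstrip(')'))
        return (old_name, old_resource), (new_name, node_uuid)

    print(transform('edafe6f4-5996-4df8-bc84-7d92439e15c0',
                    'temperature', 'system_board_(0x15)'))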


This structure would provide the ability for a consumer to do a
ceilometer resource list using the Ironic Node ID as the Resource ID to
get all the sensors in a given platform. The consumer would then
iterate over each of the sensors to get the samples it wanted. In
order to retain the information about which driver provided the sensors,
I would like to propose that a standard "sensor_provider" field be added
to the resource_metadata for every sensor, where the "sensor_provider"
field would have a string value indicating the driver that provided the
sensor information. This is where the string "ipmi", or a
vendor-specific string, would be specified.

+1


I understand that this proposed change is not backward compatible with
the existing naming, but I don't really see a good solution that would
retain backward compatibility.
For backward compatibility you could _also_ keep the old ones (with "ipmi" in 
them) for IPMI sensors.




Any/All Feedback will be appreciated,
In this version it makes a lot of sense to me; +1 if the Ceilometer folks 
are not against it.



Jim




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Import errors in tests

2014-10-02 Thread Dmitry Tantsur

On 10/02/2014 01:30 PM, Lucas Alvares Gomes wrote:

Hi,

I don't know if it's a known issue, but we have this patch in Ironic
here https://review.openstack.org/#/c/124610/ and the gate jobs for
python26 and python27 are failing because of some import error [1], and
it doesn't show what the error is exactly. It's also important to say
that the tests run locally without any problem, so I can't reproduce
the error here.

Did you try with a fresh environment?



Has anyone seen something like that?
I have to say that our test toolchain is completely inadequate in the case 
of import errors: even locally, spotting an import error involves manually 
importing all suspicious modules, because tox just outputs garbage. 
Something has to be done about it.
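
For the record, the quickest local workaround I know of is to simply import
every test module directly and see which one blows up -- a throwaway sketch
(adjust the package path to your tree):

    import importlib
    import pkgutil

    import ironic.tests  # or whatever test package you need to check

    for _, name, _ in pkgutil.walk_packages(ironic.tests.__path__,
                                            prefix='ironic.tests.'):
        try:
            importlib.import_module(name)
        except Exception as exc:  # print the real ImportError, not garbage
            print('%s: %s' % (name, exc))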




I will continue to dig into it and see if I can spot something, but I
thought it would be nice to share it here too because it may be a
potential gate problem.

[1] 
http://logs.openstack.org/10/124610/14/check/gate-ironic-python27/5c21433/console.html

Cheers,
Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-10-01 Thread Dmitry Tantsur

On 09/30/2014 02:03 PM, Soren Hansen wrote:

2014-09-12 1:05 GMT+02:00 Jay Pipes :

If Nova was to take Soren's advice and implement its data-access layer
on top of Cassandra or Riak, we would just end up re-inventing SQL
Joins in Python-land.


I may very well be wrong(!), but this statement makes it sound like you've
never used e.g. Riak. Or, if you have, not done so in the way it's
supposed to be used.

If you embrace an alternative way of storing your data, you wouldn't just
blindly create a container for each table in your RDBMS.

For example: In Nova's SQL-based datastore we have a table for security
groups and another for security group rules. Rows in the security group
rules table have a foreign key referencing the security group to which
they belong. In a datastore like Riak, you could have a security group
container where each value contains not just the security group
information, but also all the security group rules. No joins in
Python-land necessary.
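
Concretely, the stored value for a security group might look something like
this (field names are purely illustrative, not Nova's actual schema):

    # One denormalized document per security group -- rules embedded,
    # written and read as a single value, so no join is needed.
    security_group = {
        'name': 'web',
        'tenant_id': 'some-tenant-id',
        'rules': [
            {'protocol': 'tcp', 'from_port': 80, 'to_port': 80,
             'cidr': '0.0.0.0/0'},
            {'protocol': 'tcp', 'from_port': 443, 'to_port': 443,
             'cidr': '0.0.0.0/0'},
        ],
    }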


I've said it before, and I'll say it again. In Nova at least, the SQL
schema is complex because the problem domain is complex. That means
lots of relations, lots of JOINs, and that means the best way to query
for that data is via an RDBMS.


I was really hoping you could be more specific than "best"/"most
appropriate" so that we could have a focused discussion.

I don't think relying on a central data store is in any conceivable way
appropriate for a project like OpenStack. Least of all Nova.

I don't see how we can build a highly available, distributed service on
top of a centralized data store like MySQL.
Coming from a Skype background, I can assure you that you definitely can, 
depending on your needs (and our experiments with e.g. MongoDB ended 
very badly: it just died under I/O loads that our PostgreSQL treated 
as normal). I mean, it's a complex topic, and I see a lot of people 
switching to NoSQL and a lot of people switching away from it. NoSQL is 
not a silver bullet for scalability. Just my 0.5.


/me disappears again



Tens or hundreds of thousands of nodes, spread across many, many racks
and datacentre halls are going to experience connectivity problems[1].

This means that some percentage of your infrastructure (possibly many
thousands of nodes, affecting many, many thousands of customers) will
find certain functionality not working on account of your datastore not
being reachable from the part of the control plane they're attempting to
use (or possibly only being able to read from it).

I say over and over again that people should own their own uptime.
Expect things to fail all the time. Do whatever you need to do to ensure
your service keeps working even when something goes wrong. Of course
this applies to our customers too. Even if we take the greatest care to
avoid downtime, customers should spread their workloads across multiple
availability zones and/or regions and probably even multiple cloud
providers. Their service towards their users is their responsibility.

However, our service towards our users is our responsibility. We should
take the greatest care to avoid having internal problems affect our
users.  Building a massively distributed system like Nova on top of a
centralized data store is practically a guarantee of the opposite.


For complex control plane software like Nova, though, an RDBMS is the
best tool for the job given the current lay of the land in open source
data storage solutions matched with Nova's complex query and
transactional requirements.


What transactional requirements?


Folks in these other programs have actually, you know, thought about
these kinds of things and had serious discussions about alternatives.
It would be nice to have someone acknowledge that instead of snarky
comments implying everyone else "has it wrong".


I'm terribly sorry, but repeating over and over that an RDBMS is "the
best tool" without further qualification than "Nova's data model is
really complex" reads *exactly* like a snarky comment implying everyone
else "has it wrong".

[1]: http://aphyr.com/posts/288-the-network-is-reliable




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Dmitry Tantsur

On 09/25/2014 06:23 PM, Lucas Alvares Gomes wrote:

Hi,

Today we have hit the problem of having an outdated sample
configuration file again [1]. The problem with the sample generation is
that it picks up configuration from other projects/libs
(keystoneclient in that case), and this breaks the Ironic gate without
us doing anything.

So, what do you guys think about removing the test that compares the
configuration files, so that it no longer gates [2]?

We already have a tox command to generate the sample configuration
file [3], so folks who need it can generate it locally.

Does anyone disagree?
It's a pity we won't have a sample config by default, but I guess it can't 
be helped. +1 from me.




[1] https://review.openstack.org/#/c/124090/
[2] https://github.com/openstack/ironic/blob/master/tox.ini#L23
[3] https://github.com/openstack/ironic/blob/master/tox.ini#L32-L34

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration

2014-09-17 Thread Dmitry Tantsur
On Wed, 2014-09-17 at 10:36 +0100, Steven Hardy wrote:
> On Tue, Sep 16, 2014 at 02:06:59PM -0700, Devananda van der Veen wrote:
> > On Tue, Sep 16, 2014 at 12:42 PM, Zane Bitter  wrote:
> > > On 16/09/14 15:24, Devananda van der Veen wrote:
> > >>
> > >> On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter  wrote:
> > >>>
> > >>> On 16/09/14 13:56, Devananda van der Veen wrote:
> > 
> > 
> >  On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy  
> >  wrote:
> > >
> > >
> > > For example, today, I've been looking at the steps required for 
> > > driving
> > > autodiscovery:
> > >
> > > https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
> > >
> > > Driving this process looks a lot like application orchestration:
> > >
> > > 1. Take some input (IPMI credentials and MAC addresses)
> > > 2. Maybe build an image and ramdisk(could drop credentials in)
> > > 3. Interact with the Ironic API to register nodes in maintenance mode
> > > 4. Boot the nodes, monitor state, wait for a signal back containing
> > > some
> > >  data obtained during discovery (same as WaitConditions or
> > >  SoftwareDeployment resources in Heat..)
> > > 5. Shutdown the nodes and mark them ready for use by nova
> > >
> > 
> >  My apologies if the following sounds snarky -- but I think there are a
> >  few misconceptions that need to be cleared up about how and when one
> >  might use Ironic. I also disagree that 1..5 looks like application
> >  orchestration. Step 4 is a workflow, which I'll go into in a bit, but
> >  this doesn't look at all like describing or launching an application
> >  to me.
> > >>>
> > >>>
> > >>>
> > >>> +1 (Although step 3 does sound to me like something that matches Heat's
> > >>> scope.)
> > >>
> > >>
> > >> I think it's a simplistic use case, and Heat supports a lot more
> > >> complexity than is necessary to enroll nodes with Ironic.
> > >>
> > >>>
> >  Step 1 is just parse a text file.
> > 
> >  Step 2 should be a prerequisite to doing -anything- with Ironic. Those
> >  images need to be built and loaded in Glance, and the image UUID(s)
> >  need to be set on each Node in Ironic (or on the Nova flavor, if going
> >  that route) after enrollment. Sure, Heat can express this
> >  declaratively (ironic.node.driver_info must contain key:deploy_kernel
> >  with value:), but are you suggesting that Heat build the images,
> >  or just take the UUIDs as input?
> > 
> >  Step 3 is, again, just parse a text file
> > 
> >  I'm going to make an assumption here [*], because I think step 4 is
> >  misleading. You shouldn't "boot a node" using Ironic -- you do that
> >  through Nova. And you _dont_ get to specify which node you're booting.
> >  You ask Nova to provision an _instance_ on a _flavor_ and it picks an
> >  available node from the pool of nodes that match the request.
> > >>>
> > >>>
> > >>>
> > >>> I think your assumption is incorrect. Steve is well aware that
> > >>> provisioning
> > >>> a bare-metal Ironic server is done through the Nova API. What he's
> > >>> suggesting here is that the nodes would be booted - not Nova-booted, but
> > >>> booted in the sense of having power physically applied - while in
> > >>> maintenance mode in order to do autodiscovery of their capabilities,
> > >>
> > >>
> > >> Except simply applying power doesn't, in itself, accomplish anything
> > >> besides causing the machine to power on. Ironic will only prepare the
> > >> PXE boot environment when initiating a _deploy_.
> > >
> > >
> > > From what I gather elsewhere in this thread, the autodiscovery stuff is a
> > > proposal for the future, not something that exists in Ironic now, and that
> > > may be the source of the confusion.
> > >
> > > In any case, the etherpad linked at the top of this email was written by
> > > someone in the Ironic team and _clearly_ describes PXE booting a 
> > > "discovery
> > > image" in maintenance mode in order to obtain hardware information about 
> > > the
> > > box.
> > >
> > 
> > Huh. I should have looked at that earlier in the discussion. It is
> > referring to out-of-tree code whose spec was not approved during Juno.
> > 
> > Apparently, and unfortunately, throughout much of this discussion,
> > folks have been referring to potential features Ironic might someday
> > have, whereas I have been focused on the features we actually support
> > today. That is probably why it seems we are "talking past each other."
> 
> FWIW I think a big part of the problem has been that you've been focussing
> on the fact that my solution doesn't match your preconceived ideas of how
> Ironic should interface with the world, while completely ignoring the
> use-case, e.g the actual problem I'm trying to solve.
> 
> That is why I'm referring to features Ironic might someday have - because
> Ironic currently does not solve my prob

Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration

2014-09-17 Thread Dmitry Tantsur
On Tue, 2014-09-16 at 15:42 -0400, Zane Bitter wrote:
> On 16/09/14 15:24, Devananda van der Veen wrote:
> > On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter  wrote:
> >> On 16/09/14 13:56, Devananda van der Veen wrote:
> >>>
> >>> On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy  wrote:
> 
>  For example, today, I've been looking at the steps required for driving
>  autodiscovery:
> 
>  https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
> 
>  Driving this process looks a lot like application orchestration:
> 
>  1. Take some input (IPMI credentials and MAC addresses)
>  2. Maybe build an image and ramdisk(could drop credentials in)
>  3. Interact with the Ironic API to register nodes in maintenance mode
>  4. Boot the nodes, monitor state, wait for a signal back containing some
>   data obtained during discovery (same as WaitConditions or
>   SoftwareDeployment resources in Heat..)
>  5. Shutdown the nodes and mark them ready for use by nova
> 
> >>>
> >>> My apologies if the following sounds snarky -- but I think there are a
> >>> few misconceptions that need to be cleared up about how and when one
> >>> might use Ironic. I also disagree that 1..5 looks like application
> >>> orchestration. Step 4 is a workflow, which I'll go into in a bit, but
> >>> this doesn't look at all like describing or launching an application
> >>> to me.
> >>
> >>
> >> +1 (Although step 3 does sound to me like something that matches Heat's
> >> scope.)
> >
> > I think it's a simplistic use case, and Heat supports a lot more
> > complexity than is necessary to enroll nodes with Ironic.
> >
> >>
> >>> Step 1 is just parse a text file.
> >>>
> >>> Step 2 should be a prerequisite to doing -anything- with Ironic. Those
> >>> images need to be built and loaded in Glance, and the image UUID(s)
> >>> need to be set on each Node in Ironic (or on the Nova flavor, if going
> >>> that route) after enrollment. Sure, Heat can express this
> >>> declaratively (ironic.node.driver_info must contain key:deploy_kernel
> >>> with value:), but are you suggesting that Heat build the images,
> >>> or just take the UUIDs as input?
> >>>
> >>> Step 3 is, again, just parse a text file
> >>>
> >>> I'm going to make an assumption here [*], because I think step 4 is
> >>> misleading. You shouldn't "boot a node" using Ironic -- you do that
> >>> through Nova. And you _dont_ get to specify which node you're booting.
> >>> You ask Nova to provision an _instance_ on a _flavor_ and it picks an
> >>> available node from the pool of nodes that match the request.
> >>
> >>
> >> I think your assumption is incorrect. Steve is well aware that provisioning
> >> a bare-metal Ironic server is done through the Nova API. What he's
> >> suggesting here is that the nodes would be booted - not Nova-booted, but
> >> booted in the sense of having power physically applied - while in
> >> maintenance mode in order to do autodiscovery of their capabilities,
> >
> > Except simply applying power doesn't, in itself, accomplish anything
> > besides causing the machine to power on. Ironic will only prepare the
> > PXE boot environment when initiating a _deploy_.
> 
>  From what I gather elsewhere in this thread, the autodiscovery stuff is 
> a proposal for the future, not something that exists in Ironic now, and 
> that may be the source of the confusion.
> 
> In any case, the etherpad linked at the top of this email was written by 
> someone in the Ironic team and _clearly_ describes PXE booting a 
> "discovery image" in maintenance mode in order to obtain hardware 
> information about the box.
It was written by me, and it seems to be my fault that I didn't state
there more clearly that this work is not, and probably will not be, merged
into Ironic upstream. Sorry for the confusion.

That said, my experiments proved it quite possible (though not without some
network-related hacks as of now) to follow these steps to collect (aka
discover) the hardware information required for scheduling from a node,
knowing only its IPMI credentials.

> 
> cheers,
> Zane.
> 
> >> which
> >> is presumably hard to do automatically when they're turned off.
> >
> > Vendors often have ways to do this while the power is turned off, eg.
> > via the OOB management interface.
> >
> >> He's also
> >> suggesting that Heat could drive this process, which I happen to disagree
> >> with because it is a workflow not an end state.
> >
> > +1
> >
> >> However the main takeaway
> >> here is that you guys are talking completely past one another, and have 
> >> been
> >> for some time.
> >>
> >
> > Perhaps more detail in the expected interactions with Ironic would be
> > helpful and avoid me making (perhaps incorrect) assumptions.
> >
> > -D
> >

Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration

2014-09-15 Thread Dmitry Tantsur
On Mon, 2014-09-15 at 11:04 -0700, Jim Rollenhagen wrote:
> On Mon, Sep 15, 2014 at 12:44:24PM +0100, Steven Hardy wrote:
> > All,
> > 
> > Starting this thread as a follow-up to a strongly negative reaction by the
> > Ironic PTL to my patches[1] adding initial Heat->Ironic integration, and
> > subsequent very detailed justification and discussion of why they may be
> > useful in this spec[2].
> > 
> > Back in Atlanta, I had some discussions with folks interesting in making
> > "ready state"[3] preparation of bare-metal resources possible when
> > deploying bare-metal nodes via TripleO/Heat/Ironic.
> > 
> > The initial assumption is that there is some discovery step (either
> > automatic or static generation of a manifest of nodes), that can be input
> > to either Ironic or Heat.
> 
> We've discussed this a *lot* within Ironic, and have decided that
> auto-discovery (with registration) is out of scope for Ironic. 
Even if there is such an agreement, this is the first time I've heard about it.
All previous discussions _I'm aware of_ (e.g. at the midcycle) ended up with
"we can discover only things that are required for scheduling". When did
that change?

> In my
> opinion, this is straightforward enough for operators to write small
> scripts to take a CSV/JSON/whatever file and register the nodes in that
> file with Ironic. This is what we've done at Rackspace, and it's really
> not that annoying; the hard part is dealing with incorrect data from
> the (vendor|DC team|whatever).
Provided this CSV contains all the required data, not only IPMI
credentials, which IIRC is often the case.
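
Agreed that the script itself is trivial -- something along these lines (a
rough sketch against python-ironicclient; the CSV column names, credentials
and endpoint are made up) -- the hard part is indeed getting correct data
into that CSV:

    import csv

    from ironicclient import client

    ironic = client.get_client(1, os_username='admin', os_password='secret',
                               os_tenant_name='admin',
                               os_auth_url='http://keystone:5000/v2.0')

    with open('nodes.csv') as f:
        for row in csv.DictReader(f):
            # Register the node with its IPMI credentials...
            node = ironic.node.create(
                driver=row['driver'],
                driver_info={'ipmi_address': row['ipmi_address'],
                             'ipmi_username': row['ipmi_username'],
                             'ipmi_password': row['ipmi_password']})
            # ...and one port per MAC address.
            ironic.port.create(node_uuid=node.uuid, address=row['mac'])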

> 
> That said, I like the thought of Ironic having a bulk-registration
> feature with some sort of specified format (I imagine this would just be
> a simple JSON list of node objects).
> 
> We are likely doing a session on discovery in general in Paris. It seems
> like the main topic will be about how to interface with external
> inventory management systems to coordinate node discovery. Maybe Heat is
> a valid tool to integrate with here, maybe not.
> 
> > Following discovery, but before an undercloud deploying OpenStack onto the
> > nodes, there are a few steps which may be desired, to get the hardware into
> > a state where it's ready and fully optimized for the subsequent deployment:
> 
> These pieces are mostly being done downstream, and (IMO) in scope for
> Ironic in the Kilo cycle. More below.
> 
> > - Updating and aligning firmware to meet requirements of qualification or
> >   site policy
> 
> Rackspace does this today as part of what we call "decommissioning".
> There are patches up for review for both ironic-python-agent (IPA) [1] and
> Ironic [2] itself. We have support for 1) flashing a BIOS on a node, and
> 2) Writing a set of BIOS settings to a node (these are embedded in the agent
> image as a set, not through an Ironic API). These are both implemented as
> a hardware manager plugin, and so can easily be vendor-specific.
> 
> I expect this to land upstream in the Kilo release.
> 
> > - Optimization of BIOS configuration to match workloads the node is
> >   expected to run
> 
> The Ironic team has also discussed this, mostly at the last mid-cycle
> meetup. We'll likely have a session on "capabilities", which we think
> might be the best way to handle this case. Essentially, a node can be
> tagged with arbitrary capabilities, e.g. "hypervisor", which Nova
> (flavors?) could use for scheduling, and Ironic drivers could use to do
> per-provisioning work, like setting BIOS settings. This may even tie in
> with the next point.
> 
> Looks like Jay just ninja'd me a bit on this point. :)
> 
> > - Management of machine-local storage, e.g configuring local RAID for
> >   optimal resilience or performance.
> 
> I don't see why Ironic couldn't do something with this in Kilo. It's
> dangerously close to the "inventory management" line, however I think
> it's reasonable for a user to specify that his or her root partition
> should be on a RAID or a specific disk out of many in the node.
> 
> > Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
> > of these steps possible, but there's no easy way to either encapsulate the
> > (currently mostly vendor specific) data associated with each step, or to
> > coordinate sequencing of the steps.
> 
> It's important to remember that just because a blueprint/spec exists,
> does not mean it will be approved. :) I don't expect the "DRAC
> discovery" blueprint to go through, and the "DRAC RAID" blueprint is
> questionable, with regards to scope.
> 
> > What is required is some tool to take a text definition of the required
> > configuration, turn it into a correctly sequenced series of API calls to
> > Ironic, expose any data associated with those API calls, and declare
> > success or failure on completion.  This is what Heat does.
> 
> This is a fair point, however none of these use cases have code landed
> in mainline Ironic, and certainly don't have APIs exposed, with the
> exc

Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-07 Thread Dmitry Tantsur
Hi!

On Tue, 2014-08-05 at 12:33 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> The following idea came out of last week's midcycle for how to improve
> our spec process and tracking on launchpad. I think most of us liked
> it, but of course, not everyone was there, so I'll attempt to write
> out what I recall.
> 
> 
> This would apply to new specs proposed for Kilo (since the new spec
> proposal deadline has already passed for Juno).
> 
> 
> 
> 
> First, create a blueprint in launchpad and populate it with your
> spec's heading. Then, propose a spec with just the heading (containing
> a link to the BP), Problem Description, and first paragraph outlining
> your Proposed change. 
> 
> 
> This will be given an initial, high-level review to determine whether
> it is in scope and in alignment with project direction, which will be
> reflected on the review comments, and, if affirmed, by setting the
> blueprint's "Direction" field to "Approved".

How will we formally track it in Gerrit? By having several +1's by spec
cores? Or will it be done by you (I guess only you can update
"Direction" in LP)?

> 
> 
> At this point, if affirmed, you should proceed with filling out the
> entire spec, and the remainder of the process will continue as it was
> during Juno. Once the spec is approved, update launchpad to set the
> specification URL to the spec's location on
> https://specs.openstack.org/openstack/ironic-specs/ and a member of
> the team (probably me) will update the release target, priority, and
> status.
> 
> 
> 
> 
> I believe this provides two benefits. First, it should give quicker
> initial feedback to proposer if their change is going to be in/out of
> scope, which can save considerable time if the proposal is out of
> scope. Second, it allows us to track well-aligned specs on Launchpad
> before they are completely approved. We observed that several specs
> were approved at nearly the same time as the code was approved. Due to
> the way we were using LP this cycle, it meant that LP did not reflect
> the project's direction in advance of landing code, which is not what
> we intended. This may have been confusing, and I think this will help
> next cycle. FWIW, several other projects have observed a similar
> problem with spec<->launchpad interaction, and are adopting similar
> practices for Kilo.
> 
> 
> 
> 
> Comments/discussion welcome!

I'm +1 to the idea, just with some concerns about the implementation:
1. We don't have any "pre-approved" state in Gerrit -- we need agreement on
when to continue (see above).
2. We'll need to speed up spec reviews, because we're adding one more
blocker on the way to the code being merged :) Maybe that's no longer a
problem actually; we're doing it faster now.

> 
> 
> 
> -Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco Driver Blueprint

2014-08-07 Thread Dmitry Tantsur
Hi!

I didn't read the spec thoroughly, but I'm concerned by its huge scope.
It's actually several specs squashed into one (and not very detailed). My
vote is to split it into a chain of specs (at least three: power driver,
discovery, other configurations) and seek exceptions separately.
Actually, I'm +1 on making an exception for the power driver, but -0 on
the others until I see separate specs for them.
Dmitry.

On Thu, 2014-08-07 at 09:30 +0530, GopiKrishna Saripuri wrote:
> Hi,
> 
> 
> I've submitted the Ironic Cisco driver blueprint after the proposal freeze
> date. This driver is critical for Cisco and a few customers to test as
> part of their private cloud expansion. The driver implementation is
> ready along with unit tests. I will submit the code for review once the
> blueprint is accepted. 
> 
> 
> The Blueprint review link: https://review.openstack.org/#/c/110217/
> 
> 
> Please let me know If its possible to include this in Juno release.
> 
> 
> 
> Regards
> GopiKrishna S
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] help

2014-07-31 Thread Dmitry Tantsur
Hi!

This list is not for usage questions; it's for OpenStack developers. The
best way to get quick help is to use
https://ask.openstack.org/en/questions/ or to join #openstack on
Freenode and ask there.

Good luck!

On Thu, 2014-07-31 at 15:59 +0530, shailendra acharya wrote:
> Hello folks,
>   this is Shailendra Acharya. I'm trying to install OpenStack
> Icehouse on CentOS 6.5, but I got stuck and have tried almost every link
> that Google suggested to me. You are my last hope.
> When I come to create a user using the keystone command as written in
> the OpenStack installation manual,
>    keystone user-create --name=admin --pass=ADMIN_PASS
> --email=ADMIN_EMAIL
> 
> I replaced the email and password, but when I press Enter it shows an
> invalid credential error. Please do something ASAP.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How to get testr to failfast

2014-07-31 Thread Dmitry Tantsur
Hi!

On Thu, 2014-07-31 at 10:45 +0100, Chris Dent wrote:
> One of the things I like to be able to do when in the middle of making
> changes is sometimes run all the tests to make sure I haven't accidentally
> caused some unexpected damage in the neighborhood. If I have I don't
> want the tests to all run, I'd like to exit on first failure.

This makes even more sense if you _know_ that you've broken a lot of
things and want to deal with it case by case. At least for me it's more
convenient; I believe many will prefer getting all the errors at once, though.

>  This
> is a common feature in lots of testrunners but I can't seem to find
> a way to make it happen when testr is integrated with setuptools.
> 
> Any one know a way?
> 
> There's this:
>https://bugs.launchpad.net/testrepository/+bug/1211926
> 
> But it is not clear how or where to effectively pass the right argument,
> either from the command line or in tox.ini.
> 
> Even if you don't know a way, I'd like to hear from other people who
> would like it to be possible. It's one of several testing habits I
> have from previous worlds that I'm missing and doing a bit of
> commiseration would be a nice load off.

It would be my 2nd most wanted feature in our test system (after getting
a reasonable error message (at least not binary output) in case of
import errors :)

> 
> Thanks.
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating David Shrewsbury to ironic-core

2014-07-14 Thread Dmitry Tantsur
+1 

On Fri, 2014-07-11 at 15:50 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> While David (Shrews) only began working on Ironic in earnest four
> months ago, he has been working on some of the tougher problems with
> our Tempest coverage and the Nova<->Ironic interactions. He's also
> become quite active in reviews and discussions on IRC, and
> demonstrated a good understanding of the challenges facing Ironic
> today. I believe he'll also make a great addition to the core team.
> 
> 
> Below are his stats for the last 90 days.
> 
> 
> Cheers,
> Devananda
> 
> 
> +--+---++
> | Reviewer | Reviews   -2  -1  +1  +2  +A+/- % |
> Disagreements* |
> +--+---++
> 
> 
> 30
> | dshrews  |  470  11  36   0   076.6% |
>7 ( 14.9%)  |
> 
> 
> 
> 60
> | dshrews  |  910  14  77   0   084.6% |
> 15 ( 16.5%)  |
> 
> 
> 90
> | dshrews  | 1210  21 100   0   082.6% |
> 16 ( 13.2%)  |
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Jim Rollenhagen to ironic-core

2014-07-14 Thread Dmitry Tantsur
+1, much awaited!

On Fri, 2014-07-11 at 15:50 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> It's time to grow the team :)
> 
> 
> Jim (jroll) started working with Ironic at the last mid-cycle, when
> "teeth" became ironic-python-agent. In the time since then, he's
> jumped into Ironic to help improve the project as a whole. In the last
> few months, in both reviews and discussions on IRC, I have seen him
> consistently demonstrate a solid grasp of Ironic's architecture and
> its role within OpenStack, contribute meaningfully to design
> discussions, and help many other contributors. I think he will be a
> great addition to the core review team.
> 
> 
> Below are his review stats for Ironic, as calculated by the
> openstack-infra/reviewstats project with local modification to remove
> ironic-python-agent, so we can see his activity in the main project.
> 
> 
> Cheers,
> Devananda
> 
> 
> +--+---++
> | Reviewer | Reviews   -2  -1  +1  +2  +A+/- % |
> Disagreements* |
> +--+---++
> 
> 
> 30
> |  jimrollenhagen  |  290   8  21   0   072.4% |
>5 ( 17.2%)  |
> 
> 
> 60
> |  jimrollenhagen  |  760  16  60   0   078.9% |
> 13 ( 17.1%)  |
> 
> 
> 90
> |  jimrollenhagen  | 1060  27  79   0   074.5% |
> 25 ( 23.6%)  |
> 
> 
> 180
> |  jimrollenhagen  | 1570  41 116   0   073.9% |
> 35 ( 22.3%)  |
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 21:51 +0800, 严超 wrote:
> Yes, but when you assign a "production" image to an ironic bare metal
> node. You should provide ramdisk_id and kernel_id. 
What do you mean by "assign" here? Could you quote some documentation?
The instance image is "assigned" using the --image argument to `nova
boot`; its kernel and ramdisk are fetched from its metadata.

The deploy k&r are currently taken from the flavor provided by the
--flavor argument (this will change eventually).
If you're using e.g. DevStack, you don't even touch the deploy k&r;
they're bound to the "baremetal" flavor.

Please see the quick start guide for hints on this:
http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html
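
For illustration, in a DevStack-style setup the whole "assignment" boils
down to something like this (names are hypothetical, only the
flavor/image split matters here):

  # deploy k&r come from the "baremetal" flavor, instance k&r come from
  # the Glance metadata of my-user-image
  nova boot --flavor baremetal --image my-user-image \
      --key-name default my-test-node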

> 
> Should the ramdisk_id and kernel_id be the same as deploy images (aka
> the first set of k+r) ?
> 
> You didn't answer me if the two sets of r + k should be the same ? 
> 
> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
> 
> 
> 
> 2014-06-04 21:27 GMT+08:00 Dmitry Tantsur :
> On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
> > Thank you !
> >
> > I noticed the two sets of k+r in tftp configuration of
> ironic.
> >
> > Should the two sets be the same k+r ?
> 
> Deploy images are created for you by DevStack/whatever. If you
> do it by
> hand, you may use diskimage-builder. Currently they are stored
> in flavor
> metadata, will be stored in node metadata later.
> 
> And than you have "production" images that are whatever you
> want to
> deploy and they are stored in Glance metadata for the instance
> image.
> 
> TFTP configuration should be created automatically, I doubt
> you should
> change it anyway.
> 
> >
> > The first set is defined in the ironic node definition.
> >
> > How do we define the second set correctly ?
> >
> > Best Regards!
> > Chao Yan
> > --
> > My twitter:Andy Yan @yanchao727
> > My Weibo:http://weibo.com/herewearenow
> > --
> >
> >
> >
> > 2014-06-04 21:00 GMT+08:00 Dmitry Tantsur
> :
> > On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> > > Hi,
> > >
> > > Thank you very much for your reply !
> > >
> > > But there are still some questions for me. Now
> I've come to
> > the step
> > > where ironic partitions the disk as you replied.
> > >
> > > Then, how does ironic copies an image ? I know the
> image
> > comes from
> > > glance. But how to know image is really available
> when
> > reboot?
> >
> > I don't quite understand your question, what do you
> mean by
> > "available"?
> > Anyway, before deploying Ironic downloads image from
> Glance,
> > caches it
> > and just copies to a mounted iSCSI partition (using
> dd or so).
> >
> > >
> > > And, what are the differences between final kernel
> (ramdisk)
> > and
> > > original kernel (ramdisk) ?
> >
> > We have 2 sets of kernel+ramdisk:
> > 1. Deploy k+r: these are used only for deploy
> process itself
> > to provide
> > iSCSI volume and call back to Ironic. There's
> ongoing effort
> > to create
> > smarted ramdisk, called Ironic Python Agent, but
>     it's WIP.
> > 2. Your k+r as stated in Glance metadata for an
> image - they
> > will be
> > used for booting after deployment.
> >
> > >
> > > Best Regards!
> > > Chao Yan
> > > --
> > > My twitter:Andy Yan @yanchao727
> > > My Weibo:http:

Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
> Thank you !
> 
> I noticed the two sets of k+r in tftp configuration of ironic.
> 
> Should the two sets be the same k+r ?
Deploy images are created for you by DevStack/whatever. If you do it by
hand, you may use diskimage-builder. Currently they are stored in the
flavor metadata; later they will be stored in the node metadata.

And then you have "production" images, which are whatever you want to
deploy; their kernel and ramdisk are stored in the Glance metadata of
the instance image.

The TFTP configuration should be created automatically; I doubt you
should change it by hand anyway.
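
For reference, doing it by hand looks roughly like this (a sketch from
memory; the element name and the exact extra-spec keys below should be
checked against the current docs):

  # build a deploy kernel/ramdisk pair with diskimage-builder
  ramdisk-image-create ubuntu deploy-ironic -o deploy-ramdisk

  # upload deploy-ramdisk.kernel / deploy-ramdisk.initramfs to Glance,
  # then point the flavor at them via its extra specs
  nova flavor-key baremetal set \
      "baremetal:deploy_kernel_id"=<kernel-uuid> \
      "baremetal:deploy_ramdisk_id"=<ramdisk-uuid>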

> 
> The first set is defined in the ironic node definition. 
> 
> How do we define the second set correctly ? 
> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> ------
> 
> 
> 
> 2014-06-04 21:00 GMT+08:00 Dmitry Tantsur :
> On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> > Hi,
> >
> > Thank you very much for your reply !
> >
> > But there are still some questions for me. Now I've come to
> the step
> > where ironic partitions the disk as you replied.
> >
> > Then, how does ironic copies an image ? I know the image
> comes from
> > glance. But how to know image is really available when
> reboot?
> 
> I don't quite understand your question, what do you mean by
> "available"?
> Anyway, before deploying Ironic downloads image from Glance,
> caches it
> and just copies to a mounted iSCSI partition (using dd or so).
> 
> >
> > And, what are the differences between final kernel (ramdisk)
> and
> > original kernel (ramdisk) ?
> 
> We have 2 sets of kernel+ramdisk:
> 1. Deploy k+r: these are used only for deploy process itself
> to provide
> iSCSI volume and call back to Ironic. There's ongoing effort
> to create
> smarted ramdisk, called Ironic Python Agent, but it's WIP.
> 2. Your k+r as stated in Glance metadata for an image - they
> will be
> used for booting after deployment.
> 
> >
> > Best Regards!
>     > Chao Yan
> > --
> > My twitter:Andy Yan @yanchao727
> > My Weibo:http://weibo.com/herewearenow
> > --
> >
> >
> >
> > 2014-06-04 19:36 GMT+08:00 Dmitry Tantsur
> :
> > Hi!
> >
> > Workflow is not entirely documented by now AFAIK.
> After PXE
> > boots deploy
> > kernel and ramdisk, it exposes hard drive via iSCSI
> and
> > notifies Ironic.
> > After that Ironic partitions the disk, copies an
> image and
> > reboots node
> > with final kernel and ramdisk.
> >
> > On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> > > Hi, All:
> > >
> > > I searched a lot about how ironic
> automatically
> > install image
> > > on bare metal. But there seems to be no clear
> workflow out
> > there.
> > >
> > > What I know is, in traditional PXE, a bare
> metal
> > pull image
> > > from PXE server using tftp. In tftp root, there is
> a ks.conf
> > which
> > > tells tftp which image to kick start.
> > >
> > > But in ironic there is no ks.conf pointed
> in tftp.
> > How do bare
> > > metal know which image to install ? Is there any
> clear
> > workflow where
> > > I can read ?
> > >
> > >
> > >
> > >
> > > Best Regards!
> > > Chao Yan
> > > --
> > > My twitter:Andy Yan @yanchao727
> > > My Weibo:http://weibo.com/herewearenow
> >   

Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> Hi,
> 
> Thank you very much for your reply !
> 
> But there are still some questions for me. Now I've come to the step
> where ironic partitions the disk as you replied.
> 
> Then, how does ironic copies an image ? I know the image comes from
> glance. But how to know image is really available when reboot? 
I don't quite understand your question: what do you mean by "available"?
Anyway, before deploying, Ironic downloads the image from Glance, caches
it and just copies it to a mounted iSCSI partition (using dd or so).
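
Conceptually, something like the following happens on the conductor side
(a simplified sketch, not the actual code, which lives in the PXE deploy
driver):

  # attach the disk that the deploy ramdisk exported over iSCSI
  iscsiadm -m node -T <target-iqn> -p <node-ip> --login
  # partition it, then write the cached Glance image to the root partition
  dd if=<locally cached image> of=<root partition device> bs=1M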

> 
> And, what are the differences between final kernel (ramdisk) and
> original kernel (ramdisk) ? 
We have 2 sets of kernel+ramdisk:
1. Deploy k+r: these are used only for the deploy process itself, to
provide the iSCSI volume and call back to Ironic. There's an ongoing
effort to create a smarter ramdisk, called Ironic Python Agent, but it's
WIP.
2. Your k+r, as stated in the Glance metadata of the image - they will
be used for booting after deployment.
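
As an example, the second set is typically attached to the image when
uploading it to Glance (names and UUIDs below are illustrative only):

  glance image-create --name my-image \
      --disk-format ami --container-format ami \
      --property kernel_id=<kernel-image-uuid> \
      --property ramdisk_id=<ramdisk-image-uuid> \
      --file my-image.img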

> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
> 
> 
> 
> 2014-06-04 19:36 GMT+08:00 Dmitry Tantsur :
> Hi!
> 
> Workflow is not entirely documented by now AFAIK. After PXE
> boots deploy
> kernel and ramdisk, it exposes hard drive via iSCSI and
> notifies Ironic.
> After that Ironic partitions the disk, copies an image and
> reboots node
> with final kernel and ramdisk.
> 
> On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> > Hi, All:
> >
> > I searched a lot about how ironic automatically
> install image
> > on bare metal. But there seems to be no clear workflow out
> there.
> >
> > What I know is, in traditional PXE, a bare metal
> pull image
> > from PXE server using tftp. In tftp root, there is a ks.conf
> which
> > tells tftp which image to kick start.
> >
> > But in ironic there is no ks.conf pointed in tftp.
> How do bare
> > metal know which image to install ? Is there any clear
> workflow where
> > I can read ?
> >
> >
> >
> >
> > Best Regards!
> > Chao Yan
> > --
> > My twitter:Andy Yan @yanchao727
> > My Weibo:http://weibo.com/herewearenow
> > --
> >
> 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
Hi!

The workflow is not entirely documented yet, AFAIK. After PXE boots the
deploy kernel and ramdisk, the ramdisk exposes the hard drive via iSCSI
and notifies Ironic. After that, Ironic partitions the disk, copies the
image and reboots the node with the final kernel and ramdisk.
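
To answer the ks.conf part more directly: instead of a kickstart file,
Ironic generates a per-node PXE configuration in the tftp root (linked
from pxelinux.cfg/ via the node's MAC address). Very roughly it looks
like this - treat it as an illustrative sketch, the real template ships
with the PXE driver and carries the iSCSI and callback parameters that
are elided here:

  default deploy

  label deploy
      kernel <deploy kernel>
      append initrd=<deploy ramdisk> <iSCSI target and Ironic callback options>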

On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> Hi, All:
> 
> I searched a lot about how ironic automatically install image
> on bare metal. But there seems to be no clear workflow out there.
> 
> What I know is, in traditional PXE, a bare metal pull image
> from PXE server using tftp. In tftp root, there is a ks.conf which
> tells tftp which image to kick start.
> 
> But in ironic there is no ks.conf pointed in tftp. How do bare
> metal know which image to install ? Is there any clear workflow where
> I can read ?
> 
> 
> 
> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Review dashboard update

2014-06-03 Thread Dmitry Tantsur
Hi everyone!

It's hard to stop polishing things, and today I got an updated review
dashboard. Its sources are merged into Sean Dague's repository [1], so I
expect this to be the final version. Thank you everyone for the numerous
comments and suggestions, especially Ruby Loo.

Here is nice link to it: http://perm.ly/ironic-review-dashboard

Major changes since the previous edition:
- "My Patches Requiring Attention" section - all your patches that are
either WIP or have any -1.
- "Needs Reverify" - approved changes that failed Jenkins verification.
- Added a last section with changes that are either WIP or got -1 from
Jenkins (all other sections exclude these).
- The Specs section now also shows WIP specs.

I know someone requested a dashboard with the IPA subproject highlighted
- I can do such things on a case-by-case basis, just ping me on IRC.

Hope this will be helpful :)

Dmitry.

[1] https://github.com/sdague/gerrit-dash-creator


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Proposal for shared review dashboard

2014-06-02 Thread Dmitry Tantsur
Hi folks,

Inspired by the great work by Sean Dague [1], I have created a review
dashboard for the Ironic projects. Main ideas:

Ordering:
0. Viewer's own patches, that have any kind of negative feedback
1. Specs
2. Changes w/o negative feedback, with +2 already
3. Changes that did not have any feedback for 5 days
4. Changes without negative feedback (no more than 50)
5. Other changes (no more than 20)

Shows only verified patches, except for 0 and 5.
Never shows WIP patches.

I'll be thankful for any tips on how to include prioritization from
Launchpad bugs.

Short link: http://goo.gl/hqRrRw
Long link: [2]

Source code (will create PR after discussion on today's meeting): 
https://github.com/Divius/gerrit-dash-creator
To generate a link, use:
$ ./gerrit-dash-creator dashboards/ironic.dash
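
For reference, a dashboard definition is just a small ini-style file,
roughly of this shape (the section and query below are illustrative;
the real definition is dashboards/ironic.dash in that repo):

  [dashboard]
  title = Ironic Inbox
  description = Review inbox for the Ironic projects
  foreach = project:openstack/ironic status:open

  [section "Ironic Specs"]
  query = NOT owner:self project:openstack/ironic-specs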

Dmitry.

[1] https://github.com/Divius/gerrit-dash-creator
[2] https://review.openstack.org/#/dashboard/?foreach=%28project%
3Aopenstack%2Fironic+OR+project%3Aopenstack%2Fpython-ironicclient+OR
+project%3Aopenstack%2Fironic-python-agent+OR+project%3Aopenstack%
2Fironic-specs%29+status%3Aopen+NOT+label%3AWorkflow%3C%3D-1+NOT+label%
3ACode-Review%3C%3D-2+NOT+label%3AWorkflow%3E%3D1&title=Ironic+Inbox&My
+Patches+Requiring+Attention=owner%3Aself+%28label%3AVerified-1%
252cjenkins+OR+label%3ACode-Review-1%29&Ironic+Specs=NOT+owner%3Aself
+project%3Aopenstack%2Fironic-specs&Needs+Approval=label%3AVerified%3E%
3D1%252cjenkins+NOT+owner%3Aself+label%3ACode-Review%3E%3D2+NOT+label%
3ACode-Review-1&5+Days+Without+Feedback=label%3AVerified%3E%3D1%
252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%2Fironic-specs+NOT
+label%3ACode-Review%3C%3D2+age%3A5d&No+Negative+Feedback=label%
3AVerified%3E%3D1%252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%
2Fironic-specs+NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%
3E%3D2+limit%3A50&Other=label%3AVerified%3E%3D1%252cjenkins+NOT+owner%
3Aself+NOT+project%3Aopenstack%2Fironic-specs+label%3ACode-Review-1
+limit%3A20



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Dmitry Tantsur
A task scheduler responsibility: this is basically a
> state check before task is scheduled, and it should be
> done one more time once the task is started, as
> mentioned above.
>  
> c) Can we somehow detect duplicated requests
> and ignore them?
>E.g. we won't want user to make 2-3-4
> reboots in a row just because
> the user
>was not patient enough.
> 
> 
> Queue similar tasks. All the users will be pointed to
> the similar task resource, or maybe to a different
> resources which tied to the same conductor action. 
>  
> Best regards,
> Max Lobur,
> Python Developer, Mirantis, Inc.
> Mobile: +38 (093) 665 14 28
> Skype: max_lobur
> 38, Lenina ave. Kharkov, Ukraine
> www.mirantis.com
> www.mirantis.ru
> 
> 
> On Wed, May 28, 2014 at 5:10 PM, Lucas Alvares
> Gomes  wrote:
> On Wed, May 28, 2014 at 2:02 PM, Dmitry
> Tantsur  wrote:
> > Hi Ironic folks, hi Devananda!
> >
> > I'd like to share with you my thoughts on
> asynchronous API, which is
> > spec https://review.openstack.org/#/c/94923
> > First I was planned this as comments to the
> review, but it proved to be
> > much larger, so I post it for discussion on
> ML.
> >
> > Here is list of different consideration, I'd
> like to take into account
> > when prototyping async support, some are
> reflected in spec already, some
> > are from my and other's comments:
> >
> > 1. "Executability"
> > We need to make sure that request can be
> theoretically executed,
> > which includes:
> > a) Validating request body
> > b) For each of entities (e.g. nodes)
> touched, check that they are
> > available
> >at the moment (at least exist).
> >This is arguable, as checking for entity
> existence requires going to
> > DB.
> 
> >
> 
> > 2. Appropriate state
> > For each entity in question, ensure that
> it's either in a proper state
> > or
> > moving to a proper state.
> > It would help avoid users e.g. setting
> deploy twice on the same node
> > It will still require some kind of
> NodeInAWrongStateError, but we won't
> > necessary need a client retry on this one.
> >
> > Allowing the entity to be _moving_ to
> appropriate state gives us a
> > problem:
> > Imagine OP1 was running and OP2 got
> scheduled, hoping that OP1 will come
> > to desired state. What if OP1 fails? What if
> conductor, doing OP1
> > crashes?
> > That's why we may want to approve only
> operations on entities that do
> > not
> > undergo state changes. What do you think?
> >
> > Similar problem with checking node state.
> > Imagine we schedule OP2 while we had OP1 -
>  

[openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Dmitry Tantsur
Hi Ironic folks, hi Devananda!

I'd like to share with you my thoughts on the asynchronous API, which is
spec https://review.openstack.org/#/c/94923
At first I planned this as comments on the review, but it proved to be
much larger, so I'm posting it for discussion on the ML.

Here is a list of the different considerations I'd like to take into
account when prototyping async support; some are reflected in the spec
already, some come from my and others' comments:

1. "Executability"
We need to make sure that request can be theoretically executed,
which includes:
a) Validating request body
b) For each of entities (e.g. nodes) touched, check that they are
available
   at the moment (at least exist).
   This is arguable, as checking for entity existence requires going to
DB.

2. Appropriate state
For each entity in question, ensure that it's either in a proper state
or moving to a proper state.
This would help avoid users e.g. setting deploy twice on the same node.
It will still require some kind of NodeInAWrongStateError, but we won't
necessarily need a client retry on this one.

Allowing the entity to be _moving_ to an appropriate state gives us a
problem:
Imagine OP1 was running and OP2 got scheduled, hoping that OP1 will come
to the desired state. What if OP1 fails? What if the conductor doing OP1
crashes?
That's why we may want to approve only operations on entities that do
not undergo state changes. What do you think?

There is a similar problem with checking the node state.
Imagine we schedule OP2 while OP1 - a regular node state check - was
running.
OP1 discovers that the node is actually absent and puts it into
maintenance state.
What do we do with OP2?
a) The obvious answer is to fail it.
b) Can we make the client wait for the results of the periodic check?
   That is, wait for OP1 _before scheduling_ OP2?

Anyway, this point requires some state framework that knows about
states, transitions, actions and their compatibility with each other.

3. Status feedback
People would like to know how things are going with their task.
All they know is that their request was scheduled. Options (a rough
sketch of both follows after this section):
a) Poll: return some REQUEST_ID and expect users to poll some endpoint.
   Pros:
   - Should be easy to implement
   Cons:
   - Requires persistent storage for tasks. Does AMQP allow these kinds
     of queries? If not, we'll need to duplicate tasks in the DB.
   - Increased load on API instances and the DB
b) Callback: take an endpoint and call it once the task is done/fails.
   Pros:
   - Less load on both client and server
   - Answer exactly when it's ready
   Cons:
   - Will not work for the CLI and similar
   - If the conductor crashes, there will be no callback.

Seems like we'd want both (a) and (b) to comply with current needs.

If we have a state framework from (2), we can also add notifications to
it.
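
To make (a) a bit more concrete, the polling variant could look roughly
like this from a client's point of view (purely illustrative, auth
headers omitted; the task endpoint and field names are not from the
spec):

  # asynchronous request is accepted, not executed
  curl -i -X PUT .../v1/nodes/<uuid>/states/power -d '{"target": "power on"}'
  # -> HTTP 202 Accepted, body carries some REQUEST_ID

  # the client polls until the task finishes or fails
  curl .../v1/tasks/<REQUEST_ID>
  # -> {"status": "scheduled" | "running" | "done" | "error", ...}

The callback variant (b) would instead take a callback URL in the
original request and POST a similar status document to it when the task
completes.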

4. Debugging considerations
a) This is an open question: how do we debug if we have a lot of
   requests and something went wrong?
b) One more thing to consider: how to make a command like `node-show`
   aware of scheduled transitions, so that people don't try operations
   that are doomed to fail.

5. Performance considerations
a) With the async approach, users will be able to schedule a nearly
   unlimited number of tasks, thus essentially blocking Ironic's work,
   without any sign of the problem (at least for some time).
   I think there are 2 common answers to this problem:
   - Request throttling: disallow users from making too many requests in
     some amount of time. Send them 503 with the Retry-After header set.
   - Queue management: watch the queue length, deny new requests if it's
     too large.
   This means actually getting back a 503 error and will require
   retrying again!
   At least it will be an exceptional case, and won't affect Tempest
   runs...
b) The state framework from (2), if invented, can become a bottleneck as
   well, especially with the polling approach.

6. Usability considerations
a) People will be unaware of when and whether their request is going to
   be finished. As they will be tempted to retry, we may get flooded
   with duplicates. I would suggest at least making it possible to
   request cancelling any task (which will be possible only if it has
   not started yet, obviously).
b) We should try to avoid scheduling contradictory requests.
c) Can we somehow detect duplicated requests and ignore them?
   E.g. we don't want a user to trigger 2-3-4 reboots in a row just
   because they were not patient enough.

--

Possible takeaways from this letter:
- We'll need at least throttling to avoid DoS
- We'll still need handling of the 503 error, though it should not
  happen under normal conditions
- Think about a state framework that unifies all this complex logic,
  with features:
  * Track entities, their states and the actions on entities
  * Check whether a new action is compatible with the states of the
    entities it touches and with other ongoing and scheduled actions on
    these entities.
  * Handle notifications for finished and failed actions by providing
    both pull and push approaches.
  * Track whether a started action is still executing,

Re: [openstack-dev] [Ironic] [TripleO] virtual-ironic job now voting!

2014-05-25 Thread Dmitry Tantsur
Great news! Even while non-voting, it already helped me 2-3 times to
spot a subtle error in a patch.

On Fri, 2014-05-23 at 18:56 -0700, Devananda van der Veen wrote:
> Just a quick heads up to everyone -- the tempest-dsvm-virtual-ironic
> job is now fully voting in both check and gate queues for Ironic. It's
> also now symmetrically voting on diskimage-builder, since that tool is
> responsible for building the deploy ramdisk used by this test.
> 
> 
> Background: We discussed this prior to the summit, and agreed to
> continue watching the stability of the job through the summit week.
> It's been reliable for over a month now, and I've seen it catch
> several real issues, both in Ironic and in other projects, and all the
> core reviewers I spoke lately have been eager to enable voting on this
> test. So, it's done!
> 
> 
> Cheers,
> Devananda
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Dmitry Tantsur
On Thu, 2014-05-22 at 09:48 +0100, Lucas Alvares Gomes wrote:
> On Thu, May 22, 2014 at 1:03 AM, Devananda van der Veen
>  wrote:
> > I'd like to bring up the topic of drivers which, for one reason or another,
> > are probably never going to have third party CI testing.
> >
> > Take for example the iBoot driver proposed here:
> >   https://review.openstack.org/50977
> >
> > I would like to encourage this type of driver as it enables individual
> > contributors, who may be using off-the-shelf or home-built systems, to
> > benefit from Ironic's ability to provision hardware, even if that hardware
> > does not have IPMI or another enterprise-grade out-of-band management
> > interface. However, I also don't expect the author to provide a full
> > third-party CI environment, and as such, we should not claim the same level
> > of test coverage and consistency as we would like to have with drivers in
> > the gate.
> 
> +1
But we'll still expect unit tests that work via mocking their 3rd party
library (for example), right?

> 
> >
> > As it is, Ironic already supports out-of-tree drivers. A python module that
> > registers itself with the appropriate entrypoint will be made available if
> > the ironic-conductor service is configured to load that driver. For what
> > it's worth, I recall Nova going through a very similar discussion over the
> > last few cycles...
> >
> > So, why not just put the driver in a separate library on github or
> > stackforge?
> 
> I would like to have this drivers within the Ironic tree under a
> separated directory (e.g /drivers/staging/, not exactly same but kinda
> like what linux has in their tree[1]). The advatanges of having it in
> the main ironic tree is because it makes it easier to other people
> access the drivers, easy to detect and fix changes in the Ironic code
> that would affect the driver, share code with the other drivers, add
> unittests and provide a common place for development.
I do agree that having these drivers in-tree would make major changes
much easier for us (see also the note above about unit tests).

> 
> We can create some rules for people who are thinking about submitting
> their driver under the staging directory, it should _not_ be a place
> where you just throw the code and forget it, we would need to agree
> that the person submitting the code will also babysit it, we also
> could use the same process for all the other drivers wich wants to be
> in the Ironic tree to be accepted which is going through ironic-specs.
+1

> 
> Thoughts?
> 
> [1] http://lwn.net/Articles/285599/
> 
> Cheers,
> Lucas
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
I think we are still going to have multiple flavors for I, e.g.:
https://review.openstack.org/#/c/74762/
On Thu, 2014-02-20 at 08:50 -0500, Jay Dobies wrote:
> 
> On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:
> > Hi.
> >
> > While implementing CRUD operations for node profiles in Tuskar (which
> > are essentially Nova flavors renamed) I encountered editing of flavors
> > and I have some doubts about it.
> >
> > Editing of nova flavors in Horizon is implemented as
> > deleting-then-creating with a _new_ flavor ID.
> > For us it essentially means that all links to flavor/profile (e.g. from
> > overcloud role) will become broken. We had the following proposals:
> > - Update links automatically after editing by e.g. fetching all
> > overcloud roles and fixing flavor ID. Poses risk of race conditions with
> > concurrent editing of either node profiles or overcloud roles.
> >Even worse, are we sure that user really wants overcloud roles to be
> > updated?
> 
> This is a big question. Editing has always been a complicated concept in 
> Tuskar. How soon do you want the effects of the edit to be made live? 
> Should it only apply to future creations or should it be applied to 
> anything running off the old configuration? What's the policy on how to 
> apply that (canary v. the-other-one-i-cant-remember-the-name-for v. 
> something else)?
> 
> > - The same as previous but with confirmation from user. Also risk of
> > race conditions.
> > - Do not update links. User may be confused: operation called "edit"
> > should not delete anything, nor is it supposed to invalidate links. One
> > of the ideas was to show also deleted flavors/profiles in a separate
> > table.
> > - Implement clone operation instead of editing. Shows user a creation
> > form with data prefilled from original profile. Original profile will
> > stay and should be deleted manually. All links also have to be updated
> > manually.
> > - Do not implement editing, only creating and deleting (that's what I
> > did for now in https://review.openstack.org/#/c/73576/ ).
> 
> I'm +1 on not implementing editing. It's why we wanted to standardize on 
> a single flavor for Icehouse in the first place, the use cases around 
> editing or multiple flavors are very complicated.
> 
> > Any ideas on what to do?
> >
> > Thanks in advance,
> > Dmitry Tantsur
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
Hi.

While implementing CRUD operations for node profiles in Tuskar (which
are essentially Nova flavors renamed), I ran into the editing of flavors
and I have some doubts about it.

Editing of nova flavors in Horizon is implemented as
deleting-then-creating with a _new_ flavor ID.
For us it essentially means that all links to the flavor/profile (e.g.
from an overcloud role) will become broken. We had the following
proposals:
- Update links automatically after editing, by e.g. fetching all
overcloud roles and fixing the flavor ID. Poses a risk of race
conditions with concurrent editing of either node profiles or overcloud
roles.
  Even worse, are we sure that the user really wants the overcloud roles
to be updated?
- The same as the previous one, but with confirmation from the user.
Also risks race conditions.
- Do not update links. The user may be confused: an operation called
"edit" should not delete anything, nor is it supposed to invalidate
links. One of the ideas was to also show deleted flavors/profiles in a
separate table.
- Implement a clone operation instead of editing. It shows the user a
creation form with data prefilled from the original profile. The
original profile stays and has to be deleted manually. All links also
have to be updated manually.
- Do not implement editing, only creating and deleting (that's what I
did for now in https://review.openstack.org/#/c/73576/ ).
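
For reference, the delete-then-create behaviour described above boils
down to roughly the following (an illustrative sketch with made-up
values, not the actual Horizon code):

  nova flavor-delete my-node-profile
  nova flavor-create my-node-profile auto 2048 20 1
  # the new flavor comes back with a NEW id, so anything that recorded
  # the old id (e.g. an overcloud role) now points at a deleted flavor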

Any ideas on what to do?

Thanks in advance,
Dmitry Tantsur


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][All] tox 1.7.0 error while running tests

2014-02-11 Thread Dmitry Tantsur
Hi. This seems to be related:
https://bugs.launchpad.net/openstack-ci/+bug/1274135
We also encountered this.

On Tue, 2014-02-11 at 14:56 +0530, Swapnil Kulkarni wrote:
> Hello,
> 
> 
> I created a new devstack environment today and installed tox 1.7.0,
> and getting error "tox.ConfigError: ConfigError: substitution key
> 'posargs' not found".
> 
> 
> Details in [1].
> 
> 
> Anybody encountered similar error before? Any workarounds/updates
> needed?
> 
> 
> [1] http://paste.openstack.org/show/64178/
> 
> 
> 
> 
> Best Regards,
> Swapnil Kulkarni
> irc : coolsvap
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

