Re: [openstack-dev] [Neutron] client noauth deprecation

2015-01-07 Thread Miguel Ángel Ajo
Seems like a good reason to keep it; this allows us to test
internal integration in isolation from Keystone.


Miguel Ángel Ajo


On Wednesday, 7 de January de 2015 at 10:05, Assaf Muller wrote:

  
  
 - Original Message -
  The option to disable keystone authentication in the neutron client was
  marked for deprecation in August as part of a Keystone support upgrade.[1]
   
  What was the reason for this? As far as I can tell, Neutron works fine in 
  the
  'noauth' mode and there isn't a lot of code that tightly couples neutron to
  Keystone that I can think of.
   
  
  
 It was actually broken until John fixed it in:
 https://review.openstack.org/#/c/125022/
  
 We plan on using it in the Neutron in-tree full-stack testing. I'd appreciate
 if the functionality was not removed or otherwise broken :)
  
   
  1.
  https://github.com/openstack/python-neutronclient/commit/2203b013fb66808ef280eff0285318ce21d9bc67#diff-ba2e4fad85e66d9aabb6193f222fcc4cR438
   
  --
  Kevin Benton
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposal to add Flavio Percoco to stable-maint-core

2015-01-07 Thread Ihar Hrachyshka

On 01/06/2015 08:32 PM, Adam Gandelman wrote:

Hiya-

Flavio has been actively involved in stable branch maintenance for as 
long as I can remember, but it looks like his +2 abilities were 
removed after the organizational changes made to the stable 
maintenance teams.  He has expressed interest in continuing on with 
general stable maintenance and I think his proven understanding of 
branch policies makes him a valuable contributor. I propose we add him 
to the stable-maint-core team.


+2. His involvement in stable branch maintainership is much appreciated.
/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Zhou, Zhenzan
So is it possible to just integrate this project into ironic? I mean when you 
create an ironic node, it will start discovery in the background. So we don't 
need two services? 
Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:
 Hi, Dmitry

 I think this is a good project.
 I got one question: what is the relationship with ironic-python-agent?
 Thanks.
Hi!

No relationship right now, but I'm hoping to use IPA as a base for 
introspection ramdisk in the (near?) future.

 BR
 Zhou Zhenzan

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: Thursday, December 11, 2014 10:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Ironic] ironic-discoverd status update

 Hi all!

 As you know I actively promote ironic-discoverd project [1] as one of the 
 means to do hardware inspection for Ironic (see e.g. spec [2]), so I decided 
 it's worth giving some updates to the community from time to time. This 
 email is purely informative, you may safely skip it, if you're not interested.

 Background
 ==

 The discoverd project (I usually skip the ironic- part when talking 
 about it) solves the problem of populating information about a node in 
 Ironic database without help of any vendor-specific tool. This 
 information usually includes Nova scheduling properties (CPU, RAM, 
 disk
 size) and MAC's for ports.

 Introspection is done by booting a ramdisk on a node, collecting data there 
 and posting it back to discoverd HTTP API. Thus actually discoverd consists 
 of 2 components: the service [1] and the ramdisk [3]. The service handles 2 
 major tasks:
 * Processing data posted by the ramdisk, i.e. finding the node in Ironic 
 database and updating node properties with new data.
 * Managing iptables so that the default PXE environment for 
 introspection does not interfere with Neutron
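
 For illustration only, here is a minimal sketch of the ramdisk-side step described
 above; the endpoint path, port and payload keys are assumptions for the example,
 not the actual discoverd API:

 # Hypothetical sketch: post hardware facts collected in the ramdisk back to
 # the discoverd HTTP API. Endpoint path and payload keys are assumptions.
 import json
 import requests

 DISCOVERD_URL = "http://192.0.2.1:5050/v1/continue"  # assumed address and path

 facts = {
     "cpus": 8,
     "memory_mb": 16384,
     "local_gb": 100,
     "interfaces": {"eth0": {"mac": "52:54:00:12:34:56", "ip": "192.0.2.10"}},
 }

 resp = requests.post(DISCOVERD_URL, data=json.dumps(facts),
                      headers={"Content-Type": "application/json"})
 resp.raise_for_status()  # discoverd matches the node by MACs and updates it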

 The project was born from a series of patches to Ironic itself after we 
 discovered that this change is going to be too intrusive. Discoverd was 
 actively tested as part of Instack [4] and its RPM is a part of Juno RDO. 
 After the Paris summit, we agreed on bringing it closer to the Ironic 
 upstream, and now discoverd is hosted on StackForge and tracks bugs on 
 Launchpad.

 Future
 ==

 The basic feature of discoverd (supplying Ironic with properties required for 
 scheduling) is pretty finished as of the latest stable series, 0.2.

 However, more features are planned for release 1.0.0 this January [5].
 They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC 
 MAC's.

 Plugability
 ~~~

 An interesting feature of discoverd is support for plugins, which I prefer to 
 call hooks. It's possible to hook into the introspection data processing 
 chain in 2 places:
 * Before any data processing. This opens an opportunity to adapt discoverd to 
 ramdisks that have a different data format. The only requirement is that the 
 ramdisk posts a JSON object.
 * After a node is found in the Ironic database and ports are created for MAC's, 
 but before any actual data update. This gives an opportunity to alter which 
 properties discoverd is going to update.

 Actually, even the default logic of updating Node.properties is 
 contained in a plugin - see SchedulerHook in 
 ironic_discoverd/plugins/standard.py
 [6]. This plugability opens wide opportunities for integrating with 3rd party 
 ramdisks and CMDB's (which as we know Ironic is not ;).
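
 As a purely illustrative sketch, a hook might look like the following; the base
 class name and method signatures here are assumptions, not the actual interface
 in ironic_discoverd/plugins/standard.py:

 # Hypothetical discoverd hook: class and method names are assumptions made for
 # illustration; see ironic_discoverd/plugins/standard.py for the real interface.
 class RaidLevelHook(object):
     """Example hook recording a RAID level reported by a custom ramdisk."""

     def before_processing(self, node_info):
         # Runs before any data processing; normalize a third-party payload.
         node_info.setdefault('raid_level', 'unknown')

     def before_update(self, node, ports, node_info):
         # Runs after the node is found and ports are created, but before the
         # node is updated; return patches to apply to node.properties.
         return [{'op': 'add',
                  'path': '/properties/raid_level',
                  'value': node_info['raid_level']}]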

 Enrolling
 ~

 Some people have found it limiting that the introspection requires power 
 credentials (IPMI user name and password) to be already set. The recent set 
 of patches [7] introduces a possibility to request manual power on of the 
 machine and update IPMI credentials via the ramdisk to the expected values. 
 Note that support of this feature in the reference ramdisk [3] is not ready 
 yet. Also note that this scenario is only possible when using discoverd 
 directly via its API, not via the Ironic API like in [2].

 Get Involved
 

 Discoverd terribly lacks reviews. Our team is very small and 
 self-approving is not a rare case. I'm not even against fast-tracking 
 any existing Ironic core to a discoverd core after a couple of 
 meaningful reviews :)

 And of course patches are welcome, especially plugins for integration with 
 existing systems doing similar things and CMDB's. Patches are accepted via 
 usual Gerrit workflow. Ideas are accepted as Launchpad blueprints (we do not 
 follow the Gerrit spec process right now).

 Finally, please comment on the Ironic spec [2], I'd like to know what you 
 think.

 References
 ==

 [1] https://pypi.python.org/pypi/ironic-discoverd
 [2] 

Re: [openstack-dev] [Fuel] fuel master monitoring

2015-01-07 Thread Przemyslaw Kaminski
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hello,

The updated version of monitoring code is available here:

https://review.openstack.org/#/c/137785/

This is based on monit, as was agreed in this thread. The drawback of
monit is that it's basically a very simple system that doesn't track
the state of checkers, so some Python code is still needed so that the user
isn't spammed with low disk space notifications every minute.
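
To illustrate the kind of extra Python code meant here, a minimal sketch (not the
actual code under review) that only notifies on a state transition:

# Minimal sketch (not the code under review): remember the last reported state
# of each checker and only notify when it changes, so a failing "free disk
# space" check does not generate a notification every minute.
_last_states = {}

def handle_check(name, ok, notify):
    """notify is any callable that posts a notification to the Fuel UI."""
    previous = _last_states.get(name)
    if previous is None or previous != ok:
        notify("%s is now %s" % (name, "OK" if ok else "FAILING"))
    _last_states[name] = ok

# Hypothetical usage:
# handle_check("free disk space", free_bytes > threshold, post_notification)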

On 01/05/2015 10:40 PM, Andrew Woodward wrote:
 There are two threads here that need to be unraveled from each 
 other.
 
 1. We need to prevent fuel from doing anything if the OS is out of 
 disk space. It leads to a very broken database which requires a 
 developer to reset it to a usable state. From this point we need to:
 * develop a method for locking down the DB writes so that
 fuel becomes RO until space is freed

It's true that full disk space + DB writes can result in fatal
database failure. I just don't know if we can lock the DB just like
that? What if deployment is in progress?

I think the first way to reduce disk space usage would be to set
logging level to WARNING instead of DEBUG. It's good to have DEBUG
during development but I don't think it's that good for production.
Besides, it slows down deployment a lot, from what I observed.

 * develop a method (or re-use an existing one) to notify the user that a 
 serious error state exists on the host (one that could not be 
 dismissed)

Well this is done already in the review I've linked above. It
basically posts a notification to the UI system. Everything still
works as before though until the disk is full. The CLI doesn't
communicate in any way with notifications AFAIK so the warning is not
shown there.

 * we need some API that can lock / unlock the DB
 * we need some monitor process that will trigger the lock/unlock

This one can be easily changed with the code in the above review request.

 
 2. We need monitoring for the master node and fuel components in 
 general as discussed at length above. unless we intend to use this
  to also monitor the services on deployed nodes (likely bad), then
  what we use to do this is irrelevant to getting this started. If 
 we are intending to use this to also monitor deployed nodes, (again
 bad for the fuel node to do) then we need to standardize with what
 we monitor the cloud with (Zabbix currently) and offer a single
 pane of glass. Federation in the monitoring becomes a critical
 requirement here as having more than one pane of glass is an
 operations nightmare.

AFAIK installation of Zabbix is optional. We want obligatory
monitoring of the master which would somehow force its installation on
the cloud nodes.

P.

 
 Completing #1 is very important in the near term as I have had to 
 un-brick several deployments over it already. Also, in my mind 
 these are also separate tasks.
 
 On Thu, Nov 27, 2014 at 1:19 AM, Simon Pasquier 
 spasqu...@mirantis.com wrote:
 I've added another option to the Etherpad: collectd can do basic
  threshold monitoring and run any kind of scripts on alert 
 notifications. The other advantage of collectd would be the RRD 
 graphs for (almost) free. Of course since monit is already 
 supported in Fuel, this is the fastest path to get something 
 done. Simon
 
 On Thu, Nov 27, 2014 at 9:53 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:
 
 Is it possible to send http requests from monit, e.g. for 
 creating notifications? I scanned through the docs and found 
 only alerts for sending mail. Also, where will the token (username/pass)
 for monit be stored?
 
 Or maybe there is another plan? without any api interaction
 
 On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
 
 This I didn't know. It's true in fact, I checked the 
 manifests. Though monit is not deployed yet because of lack 
 of packages in Fuel ISO. Anyways, I think the argument about
  using yet another monitoring service is now rendered 
 invalid.
 
 So +1 for monit? :)
 
 P.
 
 
 On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:
 
 Monit is easy and is used to control states of Compute nodes.
 We can adopt it for master node.
 
 -- Best regards, Sergii Golovatiuk, Skype #golserge IRC 
 #holser
 
 On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:
 
 As for me - zabbix is overkill for one node. Zabbix Server
  + Agent + Frontend + DB + HTTP server, and all of it for 
 one node? Why not use something that was developed for 
 monitoring one node, doesn't have many deps and work out of
 the box? Not necessarily Monit, but something similar.
 
 On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
 
 We want to monitor Fuel master node while Zabbix is only
  on slave nodes and not on master. The monitoring service
  is supposed to be installed on Fuel master host (not 
 inside a Docker container) and provide basic info about 
 free disk space, etc.
 
 P.
 
 
 On 11/26/2014 02:58 PM, Jay Pipes wrote:
 
 On 

Re: [openstack-dev] [Keystone] Should region name be case insensitive?

2015-01-07 Thread Denis Makogon
Hello Zhou.

On Wed, Jan 7, 2015 at 10:39 AM, Zhou, Zhenzan zhenzan.z...@intel.com
wrote:

 Hi,



 I met such an issue when using the glance/nova client deployed with Devstack
 to talk with a cloud deployed with TripleO:



 [minicloud@minicloud allinone]$ glance image-list

 public endpoint for image service in RegionOne region not found



Both the glance and nova python client libraries allow users to specify a region
name (see http://docs.openstack.org/user-guide/content/sdk_auth_nova.html).
So, you are free to mention any region you want.
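
For example, a minimal sketch with placeholder credentials and URLs of passing a
region name to the nova client:

# Minimal sketch (placeholder credentials): pass the region explicitly so the
# client picks the matching endpoint from the service catalog.
from novaclient import client as nova_client

nova = nova_client.Client('2', 'admin', 'secret', 'demo',
                          'http://keystone.example.com:5000/v2.0',
                          region_name='RegionOne')  # must match the catalog exactly
print(nova.servers.list())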


  The reason is that Devstack uses “RegionOne” as default but TripleO uses
 “regionOne” and

 keystoneclient/service_catalog.py: get_endpoints() does a case sensitive
 string compare.



 I’m not a DB expert, but normally databases do case-insensitive collation,
 so should we do a case-insensitive compare here?

 Thanks a lot.



 BR

 Zhou Zhenzan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kind regards,
Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Dmitry Tantsur

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into ironic? I mean when you 
create an ironic node, it will start discovery in the background. So we don't 
need two services?
Well, the decision at the summit was that it's better to keep it 
separate. Please see https://review.openstack.org/#/c/135605/ for 
details on future interaction between discoverd and Ironic.



Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for 
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one of the means 
to do hardware inspection for Ironic (see e.g. spec [2]), so I decided it's 
worth to give some updates to the community from time to time. This email is 
purely informative, you may safely skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the ironic- part when talking
about it) solves the problem of populating information about a node in
Ironic database without help of any vendor-specific tool. This
information usually includes Nova scheduling properties (CPU, RAM,
disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting data there and 
posting it back to discoverd HTTP API. Thus actually discoverd consists of 2 
components: the service [1] and the ramdisk [3]. The service handles 2 major 
tasks:
* Processing data posted by the ramdisk, i.e. finding the node in Ironic 
database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself after we 
discovered that this change is going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and its RPM is a part of Juno RDO. 
After the Paris summit, we agreed on bringing it closer to the Ironic upstream, 
and now discoverd is hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd (supplying Ironic with properties required for 
scheduling) is pretty finished as of the latest stable series, 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC 
MAC's.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I prefer to 
call hooks. It's possible to hook into the introspection data processing chain 
in 2 places:
* Before any data processing. This opens an opportunity to adapt discoverd to 
ramdisks that have a different data format. The only requirement is that the 
ramdisk posts a JSON object.
* After a node is found in the Ironic database and ports are created for MAC's, but 
before any actual data update. This gives an opportunity to alter which 
properties discoverd is going to update.

Actually, even the default logic of updating Node.properties is
contained in a plugin - see SchedulerHook in
ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with 3rd party 
ramdisks and CMDB's (which as we know Ironic is not ;).

Enrolling
~

Some people have found it limiting that the introspection requires power 
credentials (IPMI user name and password) to be already set. The recent set of 
patches [7] introduces a possibility to request manual power on of the machine 
and update IPMI credentials via the ramdisk to the expected values. Note that 
support of this feature in the reference ramdisk [3] is not ready yet. Also 
note that this scenario is only possible when using discoverd directly via its 
API, not via Ironic API like in [2].

Get Involved


Discoverd terribly lacks reviews. Our team is very small and
self-approving is not a rare case. I'm not even against fast-tracking
any existing Ironic core to a discoverd core after a couple of
meaningful reviews :)

And of course patches are welcome, especially plugins for integration with 
existing systems doing similar things and CMDB's. Patches are accepted via 
usual Gerrit workflow. Ideas are accepted as Launchpad blueprints (we do not 
follow the Gerrit spec process right now).

Finally, please comment on the Ironic spec 

Re: [openstack-dev] [Neutron] client noauth deprecation

2015-01-07 Thread Assaf Muller


- Original Message -
 The option to disable keystone authentication in the neutron client was
 marked for deprecation in August as part of a Keystone support upgrade.[1]
 
 What was the reason for this? As far as I can tell, Neutron works fine in the
 'noauth' mode and there isn't a lot of code that tightly couples neutron to
 Keystone that I can think of.
 

It was actually broken until John fixed it in:
https://review.openstack.org/#/c/125022/

We plan on using it in the Neutron in-tree full-stack testing. I'd appreciate
if the functionality was not removed or otherwise broken :)

 
 1.
 https://github.com/openstack/python-neutronclient/commit/2203b013fb66808ef280eff0285318ce21d9bc67#diff-ba2e4fad85e66d9aabb6193f222fcc4cR438
 
 --
 Kevin Benton
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder bug commit] How to commit a bug which depends on another bug that has not been merged?

2015-01-07 Thread Denis Makogon
Hello liuxinguo.

On Wed, Jan 7, 2015 at 9:13 AM, liuxinguo liuxin...@huawei.com wrote:

  Hi all,



 · I have committed a fix for a bug and it is not yet merged. Now I want to
 commit another fix, but it depends on the previous one, which has
 not been merged.

 · So what should I do? Should I commit the latter fix directly, or
 wait for the previous one to be merged?




You are free to make dependent commits. You can find more info at
https://ask.openstack.org/en/question/31633/gerrit-best-way-to-make-a-series-of-dependent-commits/


  Any input will be appreciated, thanks!



 liu



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kind regards,
Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Questions about the Octavia project

2015-01-07 Thread Andrew Hutchings
Hi Phillip,

Thanks for your response.

 On 6 Jan 2015, at 20:33, Phillip Toohill phillip.tooh...@rackspace.com 
 wrote:
 
 I'll answer inline what I can, others can chime in to clear up anything and
 answer the rest.

The reason I asked the questions I did is because I can’t find any OS the docs 
will actually compile on, and it is difficult to find these answers trawling 
through .dot and .rst files.  I’ve since found answers for a couple of them.

I have several recommendations based on what I have read so far, such as not 
using Protobufs instead of JSON for the Amphorae-Controller configuration 
communication (I can go into lots of detail on why another time).  I very 
much like the HMAC-signed UDP messages idea though.

I now have some feedback for my team, thanks again.

Kind Regards
--
Andrew Hutchings - LinuxJedi - http://www.linuxjedi.co.uk/





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-07 Thread Radomir Dopieralski
On 06/01/15 18:39, Lin Hua Cheng wrote:
 Radomir,
 
 The current version of Angular we're using in Horizon still does not have
 the cookie and mock packages: 
 packages: 
 https://github.com/stackforge/xstatic-angular/tree/1.2.1.1/xstatic/pkg/angular/data
 
 We still need to do it the long way:
 1. Update the Angular version in global-requirements
 2. Wait till it gets merged and propagates to horizon requirements
 3. Remove references to loading of the mock and cookie packages in horizon and
 horizon requirements
 4. Remove mock and cookie from global-requirements.

That's strange, I thought that we use 1.2.16 already. Sorry for my mistake.

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] DVR Meeting is cancelled for this week.

2015-01-07 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
The DVR Meeting will be cancelled this week.
If there is any agenda we can talk during the L3 sub-team meeting.

Thanks.
Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.commailto:swaminathan.vasude...@hp.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Should region name be case insensitive?

2015-01-07 Thread Zhou, Zhenzan
Hi,

I met such an issue when using the glance/nova client deployed with Devstack to 
talk with a cloud deployed with TripleO:

[minicloud@minicloud allinone]$ glance image-list
public endpoint for image service in RegionOne region not found

The reason is that Devstack uses “RegionOne” as default but TripleO uses 
“regionOne” and
keystoneclient/service_catalog.py: get_endpoints() does a case sensitive string 
compare.
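
As an illustration of the comparison at issue, here is a standalone sketch (not
the actual keystoneclient code):

# Standalone sketch (not the actual keystoneclient code) showing why the lookup
# fails and what a case-insensitive match would look like.
endpoints = [{'region': 'regionOne',
              'publicURL': 'http://glance.example.com:9292'}]

def find_endpoint(region_name):
    for ep in endpoints:
        if ep['region'] == region_name:  # current behaviour: exact match
            return ep['publicURL']

def find_endpoint_ci(region_name):
    for ep in endpoints:
        if ep['region'].lower() == region_name.lower():  # case-insensitive variant
            return ep['publicURL']

print(find_endpoint('RegionOne'))     # None -> "endpoint ... not found"
print(find_endpoint_ci('RegionOne'))  # matches the TripleO 'regionOne' entry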

I’m not a DB expert, but normally databases do case-insensitive collation, so 
should we do a case-insensitive compare here?
Thanks a lot.

BR
Zhou Zhenzan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Switching CI back to amd64

2015-01-07 Thread Derek Higgins
Hi All,
I intended to bring this up at this morning's meeting but the train I
was on had no power sockets (and I had no battery) so sending to the
list instead.

We currently run our CI on images built for i386; we took this
decision a while back to save memory (at the time it allowed us to move
the amount of memory required in our VMs from 4G to 2G; exactly where in
those bands the hard requirements are I don't know).

Since then we have had to move back to 3G for the i386 VM as 2G was no
longer enough, so the saving in memory is no longer as dramatic.

Now that the difference isn't as dramatic, I propose we switch back to
amd64 (with 4G VMs) in order to CI on what would be closer to a
production deployment, and before making the switch I wanted to throw the
idea out there for others to digest.

This obviously would impact our capacity as we will have to reduce the
number of testenvs per testenv host. Our capacity (in RH1 and roughly
speaking) allows us to run about 1440 CI jobs per day. I believe we can
make the switch and still keep capacity above 1200 with a few other changes:
1. Add some more testenv hosts, we have 2 unused hosts at the moment and
we can probably take 2 of the compute nodes from the overcloud.
2. Kill VM's at the end of each CI test (as opposed to leaving them
running until the next CI test kills them), allowing us to more
successfully overcommit on RAM
3. maybe look into adding swap on the test env hosts; they don't
currently have any, so overcommitting RAM is a problem that the OOM
killer is handling from time to time (I only noticed this yesterday).

The other benefit to doing this is that if we were ever to want to CI
images built with packages (this has come up in previous meetings) we
wouldn't need to provide i386 packages just for CI, while the rest of
the world uses amd64.

Thanks,
Derek.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] release notes for 2014.2.2

2015-01-07 Thread Ihar Hrachyshka

Hi,

FYI I've created draft release notes for 2014.2.2:
https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.2

I assume that Trove will be released for 2014.2.2, so I've added it to 
the list of projects.


Feel free to add more notes there.

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposal to add Flavio Percoco to stable-maint-core

2015-01-07 Thread Dave Walker
+2

Flavio seems to have a good understanding of stable branch and has a
good history of reviews.

Thanks.

On 6 January 2015 at 19:32, Adam Gandelman ad...@ubuntu.com wrote:
 Hiya-

 Flavio has been actively involved in stable branch maintenance for as long
 as I can remember, but it looks like his +2 abilities were removed after the
 organizational changes made to the stable maintenance teams.  He has
 expressed interest in continuing on with general stable maintenance and I
 think his proven understanding of branch policies makes him a valuable
 contributor. I propose we add him to the stable-maint-core team.

 Cheers,
 Adam

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [sdk] Proposal to achieve consistency in client side sorting

2015-01-07 Thread Sean Dague
On 01/06/2015 09:37 PM, Rochelle Grober wrote:
 Steven,
 
  
 
 This sounds like a perfect place for a cross project spec.  It wouldn’t
 have to be a big one, but all the projects would have a chance to review
 and the TC would oversee to ensure it gets proper review.
 
  
 
 TCms, am I on point here?

Yes, this sounds reasonable. It would be a general CLI guidelines spec
which we could expand over time to include common patterns that we
prefer CLIs use when interfacing with their users.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Integration tests on the gate

2015-01-07 Thread Julie Pichon
On 25/11/14 13:55, Julie Pichon wrote:
 Hi folks,
 
 You may have noticed a new job in the check queue for Horizon patches,
 gate-horizon-dsvm-integration. This job runs the integration tests suite
 from our repository. [1]
 
 The job is marked as non-voting but *it is meant to pass.* The plan is
 to leave it that way for a couple of weeks to make sure that it is
 stable. After that it will become a voting job.

After delaying due to an intermittent bug [1], it seems like the bug
stopped occurring (from checking now as well as before the holidays), so
I submitted the patch to switch the job to voting [2].

Cheers,

Julie

 [1] https://bugs.launchpad.net/horizon/+bug/1396194
 [2] https://review.openstack.org/#/c/145477/

 
 What to do if the job fails
 ---
 
 If you notice a failure, please look at the logs and make sure that it
 wasn't caused by your patch. If it doesn't look related or if you're not
 sure how to interpret the results, please ask on #openstack-horizon or
 reply to this thread. We really want to avoid people getting used to the
 job failing, getting annoyed at it and/or blindly rechecking. If there
 are any intermittent or strange issue we'll postpone making the job
 voting, but we need to know about it so we can investigate and fix them.
 
 How to help
 ---
 
 If you'd like to help, you're very welcome to do so either by reviewing
 new tests [2] or writing more of them [3]. As with everywhere else, all
 help is very welcome and review attention is particularly appreciated!
 
 I'm really looking forward to having the integration tests be part of
 the main voting gate and for us improve the coverage. I'd really like to
 thank in particular Daniel Korn and Tomáš Nováčik for their huge efforts
 on these tests over the past year.
 
 Thanks,
 
 Julie
 
 
 [1]
 https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/integration_tests
 [2] https://wiki.openstack.org/wiki/Horizon/Testing/UI#Writing_a_test
 [3]
 https://review.openstack.org/#/q/project:openstack/horizon+file:%255E.*/integration_tests/.*+status:open,n,z
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Zabbix in HA mode

2015-01-07 Thread Tomasz Napierala
Hi Andrew and all!


 On 05 Jan 2015, at 22:05, Andrew Woodward xar...@gmail.com wrote:
 
 On Tue, Nov 25, 2014 at 5:21 AM, Bartosz Kupidura
 bkupid...@mirantis.com wrote:
 
 Hello All,
 
 Im working on Zabbix implementation which include HA support.
 
 Zabbix server should be deployed on all controllers in HA mode.
 
 This needs to be discouraged as much as putting mongo-db on the controllers.

We know that, and we can use a UI warning for that. For the reasons Mike provided, 
our users need it. 

 
 
 When zabbix component is enabled, we will install zabbix-server on all 
 controllers
 in active-backup mode (pacemaker+haproxy).
 
 Again, not forced on controllers, this is very bad.
 
 
 Controllers:
 
 While there is development use cases to deploy monitoring on combined
 controllers, and it can make use of the already existing pacemaker
 cluster, this is the wrong direction to point users. There are many
 reasons this is bad: for one, monitoring can become quite loaded, and
 as we've seen secondary load on the controllers can collapse the
 entire control plane. Secondly running monitoring on the cluster may
 also result in the monitoring going offline if the cluster does, from
 my own experience, not being able to see your monitoring is nearly
 worse than having everything down and leads to lost precious moments
 of downtime SLA.
 
 HA Scaling:
 
 Just like with controllers, our other HA components need to support a
 scale of 1 to N. This is important as a cluster will need to scale, or
 as the operator moves from POC to Production, they can deploy more
 hardware. This also helps alleviate some of the not enough nodes
 issues mentioned in the thread already


Your concerns are 100% valid and I agree with them. But what about small 
installations, where only 4 physical machines are available? We are already 
wasting one for the Fuel node, and 3 for controllers. There is hardware with a 
similar setup and it seems to be very popular. This is what we are trying to 
address.


Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposal to add Flavio Percoco to stable-maint-core

2015-01-07 Thread Thierry Carrez
Adam Gandelman wrote:
 Flavio has been actively involved in stable branch maintenance for as
 long as I can remember, but it looks like his +2 abilities were removed
 after the organizational changes made to the stable maintenance teams. 
 He has expressed interest in continuing on with general stable
 maintenance and I think his proven understanding of branch policies makes
 him a valuable contributor. I propose we add him to the
 stable-maint-core team.

+1

Flavio showed a good grasp of stable branch policy in the past, has a
cross-project focus and has indicated interest in pursuing stable-branch
work project-wide in the future.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Should region name be case insensitive?

2015-01-07 Thread Sean Dague
On 01/07/2015 06:36 AM, James Downs wrote:
 
 On Jan 7, 2015, at 12:59 AM, Denis Makogon dmako...@mirantis.com
 mailto:dmako...@mirantis.com wrote:
 

 Hello Zhou.

 On Wed, Jan 7, 2015 at 10:39 AM, Zhou, Zhenzan zhenzan.z...@intel.com
 mailto:zhenzan.z...@intel.com wrote:

 Hi, 


 I met such an issue when using the glance/nova client deployed with
 Devstack to talk with a cloud deployed with TripleO:


 [minicloud@minicloud allinone]$ glance image-list

 public endpoint for image service in RegionOne region not found


 Both the glance and nova python client libraries allow users to specify
 a region name
 (see http://docs.openstack.org/user-guide/content/sdk_auth_nova.html).
 So, you are free to mention any region you want.
 
 That’s true, but the OP was asking whether the region name should be
 case sensitive or not. 
 
 I think it probably makes sense that regionOne should be the same as
 RegionONE, or RegionOne.

The general standard in OpenStack has been case sensitivity. There are
performance and security implications in case-insensitive environments.

It just sounds like tripleo is using a bad default here, and that's what
should be addressed.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposal to add Flavio Percoco to stable-maint-core

2015-01-07 Thread Alan Pevec
+2 Flavio knows stable branch policies very well and will be a good
addition to the cross-projects stable team.

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [sdk] Proposal to achieve consistency in client side sorting

2015-01-07 Thread Anne Gentle
On Tue, Jan 6, 2015 at 8:05 PM, Everett Toews everett.to...@rackspace.com
wrote:

 On Jan 6, 2015, at 12:46 PM, Kevin L. Mitchell 
 kevin.mitch...@rackspace.com wrote:

  On Tue, 2015-01-06 at 12:19 -0600, Anne Gentle wrote:
  I'm all for consistency. Sounds like a great case for the API Working
  Group to document. You can propose a patch describing the way we want
  sorting to work.
 
 
  See https://review.openstack.org/#/q/project:openstack/api-wg,n,z
 
  I really think that the API WG should be responsible for the REST API
  only, TBH, and maybe for the Pythonic APIs.  Once we start talking about
  CLI options, I think that's outside the API WG's purview, and we
  probably should have that be up to CLI authors.  My thinking is that a
  REST API and a Python API are both used by developers, where we have one
  set of conventions; but when you start talking about CLI, you're really
  talking about UX, and the rules there can be vastly different.

 Agreed. The scope [1] of the API WG is the HTTP (REST) API.

 We won’t be touching any language SDKs (one of which is referred to as
 Pythonic APIs above) or any CLIs.


Ah, yes, my apologies. I had mistakenly thought these were sorts for the
API.

Yes, I agree this has the potential for a nice cross-project spec.

Anne



 Thanks,
 Everett

 [1] https://wiki.openstack.org/wiki/API_Working_Group#Scope
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Dmitry Tantsur

On 01/07/2015 03:44 PM, Matt Keenan wrote:

On 01/07/15 14:24, Kumar, Om (Cloud OS RD) wrote:

If it's a separate project, can it be extended to perform out-of-band
discovery too? That way there will be a single service to perform
in-band as well as out-of-band discoveries. Maybe it could follow a
driver framework for discovering nodes, where one driver could be
native (in-band) and another could be iLO-specific, etc.



I believe the following spec outlines plans for out-of-band discovery:
   https://review.openstack.org/#/c/100951/
Right, so Ironic will have drivers, one of which (I hope) will be a 
driver for discoverd.




No idea what the progress is with regard to implementation within the
Kilo cycle though.

For now we hope to get it merged in K.



cheers

Matt


Just a thought.

-Om

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 14:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into ironic? I mean
when you create an ironic node, it will start discovery in the
background. So we don't need two services?

Well, the decision at the summit was that it's better to keep it
separate. Please see https://review.openstack.org/#/c/135605/ for
details on future interaction between discoverd and Ironic.


Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one
of the means to do hardware inspection for Ironic (see e.g. spec
[2]), so I decided it's worth giving some updates to the community
from time to time. This email is purely informative, you may safely
skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the ironic- part when talking
about it) solves the problem of populating information about a node
in Ironic database without help of any vendor-specific tool. This
information usually includes Nova scheduling properties (CPU, RAM,
disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting
data there and posting it back to discoverd HTTP API. Thus actually
discoverd consists of 2 components: the service [1] and the ramdisk
[3]. The service handles 2 major tasks:
* Processing data posted by the ramdisk, i.e. finding the node in
Ironic database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself after
we discovered that this change is going to be too intrusive.
Discoverd was actively tested as part of Instack [4] and its RPM is
a part of Juno RDO. After the Paris summit, we agreed on bringing it
closer to the Ironic upstream, and now discoverd is hosted on
StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd (supplying Ironic with properties
required for scheduling) is pretty finished as of the latest stable
series, 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size
and NIC MAC's.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I
prefer to call hooks. It's possible to hook into the introspection
data processing chain in 2 places:
* Before any data processing. This opens an opportunity to adapt
discoverd to ramdisks that have a different data format. The only
requirement is that the ramdisk posts a JSON object.
* After a node is found in the Ironic database and ports are created for
MAC's, but before any actual data update. This gives an opportunity
to alter which properties discoverd is going to update.

Actually, even the default logic of updating Node.properties is
contained in a plugin - see SchedulerHook in
ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with
3rd party ramdisks and CMDB's (which as we know Ironic is not ;).

Enrolling
~

Some people have found it limiting that the introspection requires
power 

[openstack-dev] [TripleO] default region name

2015-01-07 Thread Zhou, Zhenzan
Hi, 

Does anyone know why TripleO uses regionOne as default region name? A comment 
in the code says it's the default keystone uses. 
But I cannot find any regionOne in keystone code. Devstack uses RegionOne 
by default and I do see lots of RegionOne in keystone code.

stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne *
scripts/register-endpoint:26:REGION=regionOne # NB: This is the default 
keystone uses.
scripts/register-endpoint:45:echo -r, --region  -- Override the 
default region 'regionOne'.
scripts/setup-endpoints:33:echo -r, --region-- Override 
the default region 'regionOne'.
scripts/setup-endpoints:68:REGION=regionOne #NB: This is the keystone default.
stack@u140401:~/openstack/tripleo-incubator$ grep -rn regionOne 
../tripleo-heat-templates/
stack@u140401:~/openstack/tripleo-incubator$  grep -rn regionOne 
../tripleo-image-elements/
../tripleo-image-elements/elements/tempest/os-apply-config/opt/stack/tempest/etc/tempest.conf:10:region
 = regionOne
../tripleo-image-elements/elements/neutron/os-apply-config/etc/neutron/metadata_agent.ini:3:auth_region
 = regionOne
stack@u140401:~/openstack/keystone$ grep -rn RegionOne * | wc -l
130
stack@u140401:~/openstack/keystone$ grep -rn regionOne * | wc -l
0

Another question is that TripleO doesn't export OS_REGION_NAME in stackrc.  So 
when someone sources the devstack rc file 
to do something and then sources the TripleO rc file again, OS_REGION_NAME will 
still be the one set by the devstack rc file. 
I know this may be strange, but isn't it better to use the same default value?

Thanks a lot.

BR
Zhou Zhenzan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] client noauth deprecation

2015-01-07 Thread John Schwarz
Adding to what Miguel said, a few months back I merged a patch [1]
which actually fixed the noauth feature in favour of the full-stack
effort (the full-stack patches use neutronclient in noauth mode for easy
access to neutron-server). We'll probably continue to use neutronclient
until some other alternative is mature enough (for example, API testing
providing a thorough interface to neutron-server).

Needless to say that noauth is currently working just fine and if it
were to stop working I'll be very sad ;-)

[1]: https://review.openstack.org/#/c/125022/
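
For illustration, a minimal sketch of the noauth usage in question (the endpoint
URL is a placeholder; this is not taken from the full-stack patches):

# Minimal sketch (placeholder endpoint): talk to neutron-server directly,
# skipping keystone, the way the full-stack tests use the client.
from neutronclient.v2_0 import client

neutron = client.Client(auth_strategy='noauth',
                        endpoint_url='http://127.0.0.1:9696')
print(neutron.list_networks())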

On 01/07/2015 11:44 AM, Miguel Ángel Ajo wrote:
 Seems like a good reason to keep it; this allows us to test 
 internal integration in isolation from Keystone.
 
 Miguel Ángel Ajo
 
 On Wednesday, 7 de January de 2015 at 10:05, Assaf Muller wrote:
 


 - Original Message -
 The option to disable keystone authentication in the neutron client was
 marked for deprecation in August as part of a Keystone support
 upgrade.[1]

 What was the reason for this? As far as I can tell, Neutron works
 fine in the
 'noauth' mode and there isn't a lot of code that tightly couples
 neutron to
 Keystone that I can think of.

 It was actually broken until John fixed it in:
 https://review.openstack.org/#/c/125022/

 We plan on using it in the Neutron in-tree full-stack testing. I'd
 appreciate
 if the functionality was not removed or otherwise broken :)


 1.
 https://github.com/openstack/python-neutronclient/commit/2203b013fb66808ef280eff0285318ce21d9bc67#diff-ba2e4fad85e66d9aabb6193f222fcc4cR438

 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-07 Thread Miguel Ángel Ajo
Totally correct, that’s what I meant by “will remain active”
but “unmanaged”.

Yes, it would be good to have something to tell the schedulers to ban a host.  

Miguel Ángel Ajo


On Thursday, 8 de January de 2015 at 00:52, Kevin Benton wrote:

 The problem is that if you just stop the service, it not only removes it from 
 scheduling, but it stops it from receiving updates to floating IP changes, 
 interface changes, etc. I think it would be nice to have a way to explicitly 
 stop it from being scheduled new routers, but still act as a functioning L3 
 agent otherwise.
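
For illustration, a minimal sketch (hypothetical agent ID and placeholder
credentials) of the admin_state_up toggle being discussed, using
python-neutronclient:

# Minimal sketch: today this disables the agent entirely; the proposal is to
# make it (or a new flag) affect scheduling only. Credentials are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://keystone.example.com:5000/v2.0')
l3_agents = neutron.list_agents(agent_type='L3 agent')['agents']
neutron.update_agent(l3_agents[0]['id'],
                     {'agent': {'admin_state_up': False}})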
  
 On Wed, Jan 7, 2015 at 3:30 PM, Miguel Ángel Ajo majop...@redhat.com 
 (mailto:majop...@redhat.com) wrote:
   You can stop the neutron-dhcp-agent or neutron-l3-agent;
   the agents should go not-active after the reporting timeout.
   
  The actual network services (routers, dhcp, etc) will stay
   active on the node, but unmanaged. In some cases,
  if you have automatic rescheduling of the resources
  configured, those will be spawned on other hosts.
   
  Depending on your use case this will be enough or not.
  It’s intended for upgrades and maintenance. But not
  for controlling resources in a node.
   
   
  Miguel Ángel Ajo
   
   
  On Thursday, 8 de January de 2015 at 00:20, Itsuro ODA wrote:
   
   Carl,

   Thank you for your comment.

   It seems there is no clear opinion about whether a bug report or
   blueprint is better.
   So I submitted a bug report for the moment so that the requirement
   is not forgotten.
   https://bugs.launchpad.net/neutron/+bug/1408488

   Thanks.
   Itsuro Oda

   On Tue, 6 Jan 2015 09:05:19 -0700
   Carl Baldwin c...@ecbaldwin.net (mailto:c...@ecbaldwin.net) wrote:

Itsuro,
 
It would be desirable to be able to hide an agent from scheduling,
but no one has stepped up to make this happen. Come to think of it,
I'm not sure that a bug or blueprint has been filed yet to address it,
though it is something that I've wanted for a little while now.
 
Carl
 
On Mon, Jan 5, 2015 at 4:13 PM, Itsuro ODA o...@valinux.co.jp 
(mailto:o...@valinux.co.jp) wrote:
 Neutron experts,
  
 I want to stop scheduling to a specific {dhcp|l3}_agent without
 stopping router/dhcp services on it.
 I expected setting admin_state_up of the agent to False would meet
 this demand. But this operation actually stops all services on the agent.
 (Is this behavior intended? It seems there is no
 documentation for the agent API.)
  
 I think admin_state_up of agents should affect only scheduling.
 If it is accepted I will submit a bug report and make a fix.
  
 Or should I propose a blueprint for adding function to stop
 agent's scheduling without stopping services on it ?
  
 I'd like to hear neutron experts' suggestions.
  
 Thanks.
 Itsuro Oda
 --
 Itsuro ODA o...@valinux.co.jp (mailto:o...@valinux.co.jp)
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org 
 (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 
(mailto:OpenStack-dev@lists.openstack.org)
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


   --  
   Itsuro ODA o...@valinux.co.jp (mailto:o...@valinux.co.jp)


   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org 
   (mailto:OpenStack-dev@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



   
   
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
  
 --  
 Kevin Benton  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Proposing to add Lin-Hua Cheng to horizon-stable-maint

2015-01-07 Thread Matthias Runge
Hello,

I'd like to propose to add Lin-Hua Cheng to horizon-stable-maint.

Lin has been a Horizon Core for a long time and has expressed interest
in helping out with horizon stable reviews.

I think, he'll make a great addition!

Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Kumar, Om (Cloud OS RD)
My understanding of discovery was to get all details for a node and then 
register that node with Ironic, i.e. enrollment of the node in Ironic. Pardon me 
if it was out of line with your understanding of discovery.

What I understand from the below mentioned spec is that the Node is registered, 
but the spec will help ironic discover other properties of the node.

-Om

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: 07 January 2015 20:20
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 03:44 PM, Matt Keenan wrote:
 On 01/07/15 14:24, Kumar, Om (Cloud OS RD) wrote:
 If it's a separate project, can it be extended to perform out-of-band 
 discovery too? That way there will be a single service to perform 
 in-band as well as out-of-band discoveries. Maybe it could follow a 
 driver framework for discovering nodes, where one driver could be 
 native (in-band) and another could be iLO-specific, etc.


 I believe the following spec outlines plans for out-of-band discovery:
https://review.openstack.org/#/c/100951/
Right, so Ironic will have drivers, one of which (I hope) will be a driver for 
discoverd.


 No idea what the progress is with regard to implementation within the 
 Kilo cycle though.
For now we hope to get it merged in K.


 cheers

 Matt

 Just a thought.

 -Om

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: 07 January 2015 14:34
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

 On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:
 So is it possible to just integrate this project into ironic? I mean 
 when you create an ironic node, it will start discovery in the 
 background. So we don't need two services?
 Well, the decision at the summit was that it's better to keep it 
 separate. Please see https://review.openstack.org/#/c/135605/ for 
 details on future interaction between discoverd and Ironic.

 Just a thought, thanks.

 BR
 Zhou Zhenzan

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: Monday, January 5, 2015 4:49 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

 On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:
 Hi, Dmitry

 I think this is a good project.
 I got one question: what is the relationship with ironic-python-agent?
 Thanks.
 Hi!

 No relationship right now, but I'm hoping to use IPA as a base for 
 introspection ramdisk in the (near?) future.

 BR
 Zhou Zhenzan

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: Thursday, December 11, 2014 10:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Ironic] ironic-discoverd status update

 Hi all!

 As you know I actively promote ironic-discoverd project [1] as one 
 of the means to do hardware inspection for Ironic (see e.g. spec 
 [2]), so I decided it's worth giving some updates to the community 
 from time to time. This email is purely informative, you may safely 
 skip it, if you're not interested.

 Background
 ==

 The discoverd project (I usually skip the ironic- part when 
 talking about it) solves the problem of populating information 
 about a node in Ironic database without help of any vendor-specific 
 tool. This information usually includes Nova scheduling properties 
 (CPU, RAM, disk
 size) and MAC's for ports.

 Introspection is done by booting a ramdisk on a node, collecting 
 data there and posting it back to discoverd HTTP API. Thus actually 
 discoverd consists of 2 components: the service [1] and the ramdisk 
 [3]. The service handles 2 major tasks:
 * Processing data posted by the ramdisk, i.e. finding the node in 
 Ironic database and updating node properties with new data.
 * Managing iptables so that the default PXE environment for 
 introspection does not interfere with Neutron

 The project was born from a series of patches to Ironic itself 
 after we discovered that this change is going to be too intrusive.
 Discoverd was actively tested as part of Instack [4] and its RPM 
 is a part of Juno RDO. After the Paris summit, we agreed on 
 bringing it closer to the Ironic upstream, and now discoverd is 
 hosted on StackForge and tracks bugs on Launchpad.

 Future
 ==

 The basic feature of discoverd (supplying Ironic with properties 
 required for scheduling) is pretty finished as of the latest stable 
 series, 0.2.

 However, more features are planned for release 1.0.0 this January [5].
 They go beyond the bare minimum of finding out CPU, RAM, disk size 
 and NIC MAC's.

 Plugability
 ~~~

 An interesting feature of discoverd is support for plugins, which I 
 prefer to call hooks. It's possible to hook into the introspection 
 data processing chain in 2 places:
 * Before any data processing. This opens opportunity to 

Re: [openstack-dev] [Manila]Rename driver mode

2015-01-07 Thread Ben Swartzlander


On 01/07/2015 09:20 PM, Li, Chen wrote:


Update my proposal again:

As a newcomer to manila, I started using/learning manila with the generic 
driver. When I reached driver mode, I became really confused, because I 
couldn't stop myself from jumping to the ideas: share server == nova instance, 
svm == share virtual machine == nova instance.


Then I tried GlusterFS, which works under single_svm_mode. I asked 
why it is single mode, and the answer I got was "This is the approach 
without usage of share-servers" == without using share-servers, 
then why single ??? More confusing! :(


Now I know the mistake I made is ridiculous.

Great thanks to vponomaryov & ganso, they made a big effort helping me 
figure out why I was wrong.


But I don't think I'm the last person who will make this mistake.

So, I hope we can make the driver mode names less confusing and 
easier to understand.


First, svm should be removed, or at least changed to ss 
(share-server), to make it consistent with share-server.


I don't like single/multi, because that makes me think of the number of 
share-servers, and makes me want to ask: if I create a share, does that share 
need multiple share-servers? Why?





I agree the names we went with aren't the most obvious, and I'm open to 
changing them. Share-server is the name we have for virtual machines 
created by manila drivers so a name that refers to share servers rather 
than svms could make more sense.


Also, when I was trying GlusterFS (I installed it following 
http://www.gluster.org/community/documentation/index.php/QuickStart) 
and was testing the GlusterFS volume, it said: use one of the servers 
to mount the volume. Doesn't that mean any server in the cluster 
can be used, and they all work the same? So, is there a way to 
change the glusterFS driver to add more than one glusterfs_target, with 
all glusterfs_targets being replicas of each other? Then when 
manila creates a share, it could choose one target to use. This would distribute 
data traffic across the cluster: higher bandwidth, higher performance, 
right? == This is single_svm_mode, but obviously not single.


vponomaryov & ganso suggested basic_mode and advanced_mode, but I 
think basic/advanced is more of a driver-perspective concept. Different 
drivers might already have their own concepts of basic & advanced, beyond 
manila's scope. This would be confusing for both admins and driver programmers.




I really do not like basic/advanced. I think you summarized one reason 
why it's a bad choice. The relevant difference between the modes is 
whether the driver is able to create tenant-specific instances of a 
share filesystem server or whether tenants share access to a single server.


In single_svm_mode the driver just has information about 
where to go and how; it is obtained from config opts and some special 
actions of the drivers, while multi_svm_mode needs to create the where and 
how from that information.


My suggestion is:

single_svm_mode == static_mode

multi_svm_mode  == dynamic_mode

since the where to go and how are static under single_svm_mode, but 
dynamically created/deleted by manila under multi_svm_mode.




Static/dynamic is better than basic/advanced, but I still think we can 
do better. I will think about it and try to come up with another idea 
before the meeting tomorrow.



Also, about the share-server concept.

A share-server is a tenant-point-of-view concept; the tenant does not know 
whether it is a VM or dedicated hardware outside openstack, because it is not 
visible to the tenant.


Each share has its own share-server, no matter how it is obtained (from 
configuration under single_svm_mode, or from manila under 
multi_svm_mode).




I think I understand what you mean here, but in a more technical sense, 
share servers are something we hide from the tenant. When a tenant asks 
for a share to be created, it might get created on a server that already 
exists, or a new one might get created. The tenant has no control over 
this, and ideally shouldn't even know which decision manila made. The 
only thing we promise to the tenant is that they'll get a share. The 
intent of this design is to offer maximum flexibility to the driver 
authors, and to accommodate the widest variety of possible storage 
controller designs, without causing details about the backends to leak 
through the API layer and break the primary goal of Manila which is to 
provide a standardized API regardless of what the actual implementation is.


We need to keep the above goals in mind when making decisions about 
share servers.


I got the wrong idea that glusterFS has no share server based on 
https://github.com/openstack/manila/blob/master/manila/share/manager.py#L238; 
without reading the driver code, doesn't that say: I create a share without 
a share-server? But the truth is just that the share-server is not handled by 
manila, which doesn't mean it does not exist. E.g. in glusterFS, the share-server 
is self.gluster_address.


So, I suggest editing the ShareManager code to get the share_server before 
create_share, based on the driver mode.


Such as:
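
(A minimal, purely illustrative sketch of the kind of change being suggested; the mode check and the helper names below are hypothetical, not existing Manila code.)

# Hypothetical sketch only, not existing Manila code: resolve a share server
# for both modes before calling the driver, so that "manila does not manage
# the share server" is no longer conflated with "there is no share server".

def provide_share_server(driver, context, share):
    if getattr(driver, 'mode', None) == 'single_svm_mode':
        # statically configured backend, e.g. the gluster address from config
        return driver.get_configured_share_server()
    # multi_svm_mode: create or look up a share server dynamically, as today
    return driver.setup_share_server(context, share)

def create_share(driver, db, context, share_id):
    share = db.share_get(context, share_id)
    share_server = provide_share_server(driver, context, share)
    return driver.create_share(context, share, share_server=share_server)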


[openstack-dev] Re: [neutron][AdvancedServices] Confusion about the solution of the service chaining!

2015-01-07 Thread lv . erchun
Hi Sumit,

Thanks for your reply; one more question.

If I just use 'group-based-policy-service-chaining' to develop 
the service chaining feature, how do I map the network services in 
neutron to the GBP model? All the network services we implemented 
are based on the neutron model, but 'group-based-policy-service-chaining' 
sets up the service chaining based on the GBP model, so how can we set up 
service chaining for network services based on the neutron model using 
'group-based-policy-service-chaining'?

BR
Alan




From: Sumit Naiksatam sumitnaiksa...@gmail.com
To: OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org, 
Date: 2015/01/08 10:46
Subject: Re: [openstack-dev] [neutron][AdvancedServices] Confusion about 
the solution of the service chaining!



Hi Alan,

Responses inline...

On Wed, Jan 7, 2015 at 4:25 AM,  lv.erc...@zte.com.cn wrote:
 Hi,

 I want to confirm that how is the project about Neutron Services 
Insertion,
 Chaining, and Steering going, I found that all the code implementation
 about service insertion、service chaining and traffic steering list in
 JunoPlan were Abandoned .

 https://wiki.openstack.org/wiki/Neutron/AdvancedServices/JunoPlan

 and I also found that we have a new project about GBP and
 group-based-policy-service-chaining be located at:

 
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-abstraction


 
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining


 so I'm confused with solution of the service chaining.


Yes, the above two blueprints have been implemented and are available
for consumption today as a part of the Group-based Policy codebase and
release. The GBP model uses a policy trigger to drive the service
composition and can accommodate different rendering policies like
realization using NFV SFC.

 We are developing the service chaining feature, so we need to know which 
one
 is the neutron's choice.

It would be great if you can provide feedback on the current
implementation, and perhaps participate and contribute as well.

 Are the blueprints about the service insertion,
 service chaining and traffic steering list in JunoPlan all Abandoned ?


Some aspects of this are perhaps a good fit in Neutron and others are
not. We are looking forward to continuing the discussion on this topic
on the areas which are potentially a good fit for Neutron (we have had
this discussion before as well).

 BR
 Alan



 
 ZTE Information Security Notice: The information contained in this mail 
(and
 any attachment transmitted herewith) is privileged and confidential and 
is
 intended for the exclusive use of the addressee(s).  If you are not an
 intended recipient, any disclosure, reproduction, distribution or other
 dissemination or use of the information contained is strictly 
prohibited.
 If you have received this mail in error, please delete it and notify us
 immediately.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-07 Thread Kevin Benton
If the new requirement is expressed in the neutron packages for the distro,
wouldn't it be transparent to the operators?

On Wed, Jan 7, 2015 at 6:57 AM, Kyle Mestery mest...@mestery.com wrote:

 On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:

 Hi all,

  I've found out that dnsmasq < 2.67 does not work properly for IPv6
  clients when it comes to MAC address matching (it fails to match, and so
  clients get a 'no addresses available' response). I've requested a version bump
  to 2.67 in: https://review.openstack.org/145482

 Good catch, thanks for finding this Ihar!


  Now, since we've already released Juno with IPv6 DHCP stateful support,
  and the DHCP agent still has the minimal version set to 2.63 there, we have a
  dilemma on how to manage it from a stable perspective.

 Obviously, we should communicate the revealed version dependency to
 deployers via next release notes.

 Should we also backport the minimal version bump to Juno? This will
 result in DHCP agent failing to start in case packagers don't bump dnsmasq
 version with the next Juno release. If we don't bump the version, we may
 leave deployers uninformed about the fact that their IPv6 stateful
 instances won't get any IPv6 address assigned.

 An alternative is to add a special check just for Juno that would WARN
 administrators instead of failing to start DHCP agent.

 Comments?

 Personally, I think the WARN may be the best route to go. Backporting a
 change which bumps the required dnsmasq version seems like it may be harder
 for operators to handle.

 Kyle
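
Purely as an illustration of the two options being weighed (not actual Neutron agent code), a minimal version check could either warn or refuse to start; this assumes dnsmasq reports its version in the usual 'Dnsmasq version X.YZ' form.

# Illustrative sketch only, not the actual Neutron DHCP agent code.
import logging
import re
import subprocess

LOG = logging.getLogger(__name__)
MINIMUM_FOR_DHCPV6 = (2, 67)

def check_dnsmasq_version(fail_hard=False):
    output = subprocess.check_output(['dnsmasq', '--version'])
    match = re.search(r'version (\d+)\.(\d+)', output.decode('utf-8'))
    found = tuple(int(part) for part in match.groups()) if match else (0, 0)
    if found < MINIMUM_FOR_DHCPV6:
        msg = ('dnsmasq %s.%s found; >= 2.67 is needed for DHCPv6 MAC '
               'address matching' % found)
        if fail_hard:
            raise SystemExit(msg)  # refuse to start, as described for master
        LOG.warning(msg)           # proposed stable/juno behaviour: warn only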


 /Ihar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][AdvancedServices] Confusion about the solution of the service chaining!

2015-01-07 Thread A, Keshava
Yes, I agree with Kyle's decision.

First we should define what a Service is.
Is the Service within the OpenStack infrastructure, or does the Service belong to an NFV 
vNF/Service-VM?
Based on that, its chaining needs to be defined.
If it is chaining of vNFs (which are a service or a set of services), then it will be 
based on the IETF 'service header insertion' at the ingress.
This header will carry the set of services that need to be executed across 
the vNFs, in each of the tenant's packets.

So it requires a coordinated effort along with the NFV/Telco working groups.

keshava

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Wednesday, January 07, 2015 8:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][AdvancedServices] Confusion about the 
solution of the service chaining!

On Wed, Jan 7, 2015 at 6:25 AM, lv.erc...@zte.com.cn wrote:
Hi,

I want to confirm that how is the project about Neutron Services Insertion, 
Chaining, and Steering going, I found that all the code implementation about 
service insertion、service chaining and traffic steering list in JunoPlan were 
Abandoned .

https://wiki.openstack.org/wiki/Neutron/AdvancedServices/JunoPlan

and I also found that we have a new project about GBP and 
group-based-policy-service-chaining be located at:

https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-abstraction

https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

so I'm confused with solution of the service chaining.

We are developing the service chaining feature, so we need to know which one is 
the neutron's choice. Are the blueprints about the service insertion, service 
chaining and traffic steering list in JunoPlan all Abandoned ?
Service chaining isn't in the plan for Kilo [1], but I expect it to be 
something we talk about in Vancouver for the Lxxx release. The NFV/Telco group 
has been talking about this as well. I'm hopeful we can combine efforts and 
come up with a coherent service chaining solution that solves a handful of 
useful use cases during Lxxx.

Thanks,
Kyle

[1] 
http://specs.openstack.org/openstack/neutron-specs/priorities/kilo-priorities.html

BR
Alan






ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] LDAP Assignment Backend Use Survey

2015-01-07 Thread Morgan Fainberg
As a note, since I've seen some responses about users and/or groups on this 
survey, I will be sending a survey about identity out today. This survey is 
strictly about projects/tenants and roles/role assignments in LDAP. 

Sent via mobile

 On Jan 6, 2015, at 11:21, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 
 The Keystone team is evaluating the support of the LDAP Assignment backend 
 within OpenStack and how it is used in deployments. The assignment backend 
 covers “Projects/Tenants”, “Roles/Grants”, and in the case of SQL “Domains”.
 
 There is a concern that the assignment backend implemented against LDAP is 
 falling further and further behind the SQL implementation. To get a good read 
 on the deployments and how the LDAP assignment backend is being used, the 
 Keystone development team would appreciate feedback from the community.  
 Please fill out the following form and let us know if you are using LDAP 
 Assignment, what it provides you that the SQL assignment backend is not 
 providing, and the release of OpenStack (specifically Keystone) you are using.
 
 http://goo.gl/forms/xz6xJQOQf5
 
 This poll is only meant to get information on the use of the LDAP Assignment 
 backend which only contains Projects/Tenants and Roles/Grants.
 
 Cheers,
 Morgan Fainberg
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telco][nfv] Meeting summary and logs - 2015-01-07

2015-01-07 Thread Steve Gordon
Hi all,

Please find minutes and logs for today's OpenStack Telco Working group meeting 
at the locations listed below:

* Minutes:
http://eavesdrop.openstack.org/meetings/telcowg/2015/telcowg.2015-01-07-22.00.html
* Minutes (text): 
http://eavesdrop.openstack.org/meetings/telcowg/2015/telcowg.2015-01-07-22.00.txt
* Log:
http://eavesdrop.openstack.org/meetings/telcowg/2015/telcowg.2015-01-07-22.00.log.html

I would also like to highlight that we discussed the intent to try and have a 
face to face meetup at the operators midcycle meetup, if enough parties are 
interested. Please refer to https://etherpad.openstack.org/p/PHL-ops-meetup for 
more information.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Switching CI back to amd64

2015-01-07 Thread Ben Nemec
On 01/07/2015 11:29 AM, Clint Byrum wrote:
 Excerpts from Derek Higgins's message of 2015-01-07 02:51:41 -0800:
 Hi All,
 I intended to bring this up at this mornings meeting but the train I
 was on had no power sockets (and I had no battery) so sending to the
 list instead.

  We currently run our CI with images built for i386; we took this
  decision a while back to save memory (at the time it allowed us to move
  the amount of memory required in our VMs from 4G to 2G; exactly where in
  those bands the hard requirements lie, I don't know).

 Since then we have had to move back to 3G for the i386 VM as 2G was no
 longer enough so the saving in memory is no longer as dramatic.

  Now that the difference isn't as dramatic, I propose we switch back to
  amd64 (with 4G VMs) in order to CI on what would be closer to a
  production deployment. Before making the switch I wanted to throw the
  idea out there for others to digest.

  This obviously would impact our capacity as we will have to reduce the
  number of testenvs per testenv host. Our capacity (in RH1 and roughly
  speaking) allows us to run about 1440 CI jobs per day. I believe we can
  make the switch and still keep capacity above 1200 with a few other changes:
  1. Add some more testenv hosts; we have 2 unused hosts at the moment and
  we can probably take 2 of the compute nodes from the overcloud.
  2. Kill VMs at the end of each CI test (as opposed to leaving them
  running until the next CI test kills them), allowing us to more
  successfully overcommit on RAM.
  3. Maybe look into adding swap on the testenv hosts; they don't
  currently have any, so overcommitting RAM is a problem that the OOM
  killer is handling from time to time (I only noticed this yesterday).

  The other benefit to doing this is that if we were ever to want to CI
  images built with packages (this has come up in previous meetings) we
  wouldn't need to provide i386 packages just for CI, while the rest of
  the world uses amd64.
 
 +1 on all counts.
 
 It's also important to note that we should actually have a whole new
 rack of servers added to capacity soon (I think soon is about 6 months
 so far, but we are at least committed to it). So this would be, at worst,
 a temporary loss of 240 jobs per day.

Actually it should be sooner than that - hp1 still isn't in the CI
rotation yet, so once that infra change merges (the only thing
preventing us from using it AFAIK) we'll be getting a bunch more
capacity in the much nearer term.  Unless Derek is already counting that
in his estimates above, of course.

I don't feel like we've been all that capacity constrained lately
anyway, so as I said in my other (largely unnecessary, as it turns out)
email, I'm +1 on doing this.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping sevices

2015-01-07 Thread Itsuro ODA
Carl,

Thank you for your comment.

It seems there is no clear opinion about whether a bug report or
a blueprint is better.
So I submitted a bug report for the moment so that the requirement
is not forgotten.
https://bugs.launchpad.net/neutron/+bug/1408488

Thanks.
Itsuro Oda

On Tue, 6 Jan 2015 09:05:19 -0700
Carl Baldwin c...@ecbaldwin.net wrote:

 Itsuro,
 
  It would be desirable to be able to hide an agent from scheduling
 but no one has stepped up to make this happen.  Come to think of it,
 I'm not sure that a bug or blueprint has been filed yet to address it
 though it is something that I've wanted for a little while now.
 
 Carl
 
 On Mon, Jan 5, 2015 at 4:13 PM, Itsuro ODA o...@valinux.co.jp wrote:
  Neutron experts,
 
  I want to stop scheduling to a specific {dhcp|l3}_agent without
  stopping router/dhcp services on it.
   I expected that setting admin_state_up of the agent to False would meet
   this demand. But this operation actually stops all services on the agent.
   (Is this behavior intended? It seems there is no
   documentation for the agent API.)
 
  I think admin_state_up of agents should affect only scheduling.
  If it is accepted I will submit a bug report and make a fix.
 
  Or should I propose a blueprint for adding function to stop
  agent's scheduling without stopping services on it ?
 
  I'd like to hear neutron experts' suggestions.
 
  Thanks.
  Itsuro Oda
  --
  Itsuro ODA o...@valinux.co.jp
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Itsuro ODA o...@valinux.co.jp


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-07 Thread Belmiro Moreira
Hi Anita,
I'm available Tuesday and Wednesday (0800-1600 UTC), Friday (0800-1800 UTC).

Belmiro



On Tuesday, December 30, 2014, Oleg Bondarev obonda...@mirantis.com wrote:



  On Tue, Dec 30, 2014 at 12:56 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/24/2014 04:07 AM, Oleg Bondarev wrote:
   On Mon, Dec 22, 2014 at 10:08 PM, Anita Kuno ante...@anteaya.info wrote:
 
  On 12/22/2014 01:32 PM, Joe Gordon wrote:
   On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery mest...@mestery.com wrote:
 
   On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno ante...@anteaya.info wrote:
 
  Rather than waste your time making excuses let me state where we are
  and
  where I would like to get to, also sharing my thoughts about how you
  can
  get involved if you want to see this happen as badly as I have been
  told
  you do.
 
  Where we are:
  * a great deal of foundation work has been accomplished to
 achieve
  parity with nova-network and neutron to the extent that those
 involved
  are ready for migration plans to be formulated and be put in place
  * a summit session happened with notes and intentions[0]
  * people took responsibility and promptly got swamped with other
  responsibilities
  * spec deadlines arose and in neutron's case have passed
  * currently a neutron spec [1] is a work in progress (and it
 needs
  significant work still) and a nova spec is required and doesn't
 have a
  first draft or a champion
 
  Where I would like to get to:
  * I need people in addition to Oleg Bondarev to be available to
  help
  come up with ideas and words to describe them to create the specs
 in a
  very short amount of time (Oleg is doing great work and is a
 fabulous
  person, yay Oleg, he just can't do this alone)
  * specifically I need a contact on the nova side of this complex
  problem, similar to Oleg on the neutron side
  * we need to have a way for people involved with this effort to
  find
  each other, talk to each other and track progress
  * we need to have representation at both nova and neutron weekly
  meetings to communicate status and needs
 
  We are at K-2 and our current status is insufficient to expect this
  work
  will be accomplished by the end of K-3. I will be championing this
  work,
  in whatever state, so at least it doesn't fall off the map. If you
  would
  like to help this effort please get in contact. I will be thinking
 of
  ways to further this work and will be communicating to those who
  identify as affected by these decisions in the most effective
 methods
  of
  which I am capable.
 
  Thank you to all who have gotten us as far as well have gotten in
 this
  effort, it has been a long haul and you have all done great work.
 Let's
  keep going and finish this.
 
  Thank you,
  Anita.
 
  Thank you for volunteering to drive this effort Anita, I am very
 happy
  about this. I support you 100%.
 
  I'd like to point out that we really need a point of contact on the
 nova
  side, similar to Oleg on the Neutron side. IMHO, this is step 1 here
 to
  continue moving this forward.
 
 
  At the summit the nova team marked the nova-network to neutron
 migration
  as
  a priority [0], so we are collectively interested in seeing this
 happen
  and
  want to help in any way possible.   With regard to a nova point of
  contact,
  anyone in nova-specs-core should work, that way we can cover more time
  zones.
 
  From what I can gather the first step is to finish fleshing out the
 first
  spec [1], and it sounds like it would be good to get a few nova-cores
  reviewing it as well.
 
 
 
 
  [0]
 
 
 http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html
  [1] https://review.openstack.org/#/c/142456/
 
 
  Wonderful, thank you for the support Joe.
 
  It appears that we need to have a regular weekly meeting to track
  progress in an archived manner.
 
  I know there was one meeting November but I don't know what it was
  called so so far I can't find the logs for that.
 
 
  It wasn't official, we just gathered together on #novamigration.
 Attaching
  the log here.
 
 Ah, that would explain why I couldn't find the log. Thanks for the
 attachment.
 
  So if those affected by this issue can identify what time (UTC please,
  don't tell me what time zone you are in it is too hard to guess what
 UTC
  time you are available) and day of the week you are available for a
  meeting I'll create one and we can start talking to each other.
 
  I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC
 and
  1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100
 UTC.
 
 
  I'm available each weekday 0700-1600 UTC, 1700-1800 UTC is also
 acceptable.
 
  Thanks,
  Oleg
 Wonderful, thank you Oleg. We will aim for a 

Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without topping sevices

2015-01-07 Thread Miguel Ángel Ajo
You can stop the neutron-dhcp-agent or neutron-l3-agent;
the agents should go not-active after the reporting timeout.

The actual network services (routers, dhcp, etc.) will stay
active on the node, but unmanaged. In some cases,
if you have automatic rescheduling of the resources
configured, those will be spawned on other hosts.

Depending on your use case this will be enough or not.
It's intended for upgrades and maintenance, but not
for controlling resources on a node.


Miguel Ángel Ajo


On Thursday, 8 de January de 2015 at 00:20, Itsuro ODA wrote:

 Carl,
  
 Thank you for your comment.
  
 It seems there is no clear opinion about whether bug report or
 buleprint is better.  
 So I submitted a bug report for the moment so that the requirememt
 is not forgotten.
 https://bugs.launchpad.net/neutron/+bug/1408488
  
 Thanks.
 Itsuro Oda
  
 On Tue, 6 Jan 2015 09:05:19 -0700
 Carl Baldwin c...@ecbaldwin.net (mailto:c...@ecbaldwin.net) wrote:
  
  Itsuro,
   
  It would be desirable to be able to be hide an agent from scheduling
  but no one has stepped up to make this happen. Come to think of it,
  I'm not sure that a bug or blueprint has been filed yet to address it
  though it is something that I've wanted for a little while now.
   
  Carl
   
  On Mon, Jan 5, 2015 at 4:13 PM, Itsuro ODA o...@valinux.co.jp 
  (mailto:o...@valinux.co.jp) wrote:
   Neutron experts,

   I want to stop scheduling to a specific {dhcp|l3}_agent without
   stopping router/dhcp services on it.
   I expected setting admin_state_up of the agent to False is met
   this demand. But this operation stops all services on the agent
   in actuality. (Is this behavior intended ? It seems there is no
   document for agent API.)

   I think admin_state_up of agents should affect only scheduling.
   If it is accepted I will submit a bug report and make a fix.

   Or should I propose a blueprint for adding function to stop
   agent's scheduling without stopping services on it ?

   I'd like to hear neutron experts' suggestions.

   Thanks.
   Itsuro Oda
   --
   Itsuro ODA o...@valinux.co.jp (mailto:o...@valinux.co.jp)


   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org 
   (mailto:OpenStack-dev@lists.openstack.org)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

   
   
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
 --  
 Itsuro ODA o...@valinux.co.jp (mailto:o...@valinux.co.jp)
  
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Switching CI back to amd64

2015-01-07 Thread Ben Nemec
+1 to all of this.

On 01/07/2015 04:51 AM, Derek Higgins wrote:
 Hi All,
 I intended to bring this up at this mornings meeting but the train I
 was on had no power sockets (and I had no battery) so sending to the
 list instead.
 
  We currently run our CI with images built for i386; we took this
  decision a while back to save memory (at the time it allowed us to move
  the amount of memory required in our VMs from 4G to 2G; exactly where in
  those bands the hard requirements lie, I don't know).
 
 Since then we have had to move back to 3G for the i386 VM as 2G was no
 longer enough so the saving in memory is no longer as dramatic.
 
  Now that the difference isn't as dramatic, I propose we switch back to
  amd64 (with 4G VMs) in order to CI on what would be closer to a
  production deployment. Before making the switch I wanted to throw the
  idea out there for others to digest.
 
  This obviously would impact our capacity as we will have to reduce the
  number of testenvs per testenv host. Our capacity (in RH1 and roughly
  speaking) allows us to run about 1440 CI jobs per day. I believe we can
  make the switch and still keep capacity above 1200 with a few other changes:
  1. Add some more testenv hosts; we have 2 unused hosts at the moment and
  we can probably take 2 of the compute nodes from the overcloud.
  2. Kill VMs at the end of each CI test (as opposed to leaving them
  running until the next CI test kills them), allowing us to more
  successfully overcommit on RAM.
  3. Maybe look into adding swap on the testenv hosts; they don't
  currently have any, so overcommitting RAM is a problem that the OOM
  killer is handling from time to time (I only noticed this yesterday).
 
  The other benefit to doing this is that if we were ever to want to CI
  images built with packages (this has come up in previous meetings) we
  wouldn't need to provide i386 packages just for CI, while the rest of
  the world uses amd64.
 
 Thanks,
 Derek.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without topping sevices

2015-01-07 Thread Kevin Benton
The problem is that if you just stop the service, it not only removes it
from scheduling, but it stops it from receiving updates to floating IP
changes, interface changes, etc. I think it would be nice to have a way to
explicitly stop it from being scheduled new routers, but still act as a
functioning L3 agent otherwise.
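
For reference, a minimal sketch of the operation under discussion, flipping admin_state_up on an agent through the Neutron v2.0 agents API (the endpoint, token and agent ID are placeholders); as noted above, today this effectively disables the agent's services rather than only removing it from scheduling.

# Illustrative only: disable an agent via the Neutron v2.0 agents API.
import json
import requests

NEUTRON_ENDPOINT = 'http://controller:9696'   # assumed Neutron API endpoint
TOKEN = '<admin-keystone-token>'              # placeholder token
AGENT_ID = '<l3-agent-uuid>'                  # placeholder agent id

response = requests.put(
    '%s/v2.0/agents/%s' % (NEUTRON_ENDPOINT, AGENT_ID),
    headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
    data=json.dumps({'agent': {'admin_state_up': False}}),
)
print(response.status_code, response.json())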

On Wed, Jan 7, 2015 at 3:30 PM, Miguel Ángel Ajo majop...@redhat.com
wrote:

 You can stop the neutron-dhcp-agent or neutron-l3-agent,
 the agents should go not-active after reporting timeout.

 The actual network services (routers, dhcp, etc) will stay
 active into the node, but unmanaged. In some cases,
 if you have automatic rescheduling of the resources
 configured, those will be spawned on other hosts.

 Depending on your use case this will be enough or not.
 It’s intended for upgrades and maintenance. But not
 for controlling resources in a node.

 Miguel Ángel Ajo

 On Thursday, 8 de January de 2015 at 00:20, Itsuro ODA wrote:

 Carl,

 Thank you for your comment.

 It seems there is no clear opinion about whether bug report or
 buleprint is better.
 So I submitted a bug report for the moment so that the requirememt
 is not forgotten.
 https://bugs.launchpad.net/neutron/+bug/1408488

 Thanks.
 Itsuro Oda

 On Tue, 6 Jan 2015 09:05:19 -0700
 Carl Baldwin c...@ecbaldwin.net wrote:

 Itsuro,

 It would be desirable to be able to be hide an agent from scheduling
 but no one has stepped up to make this happen. Come to think of it,
 I'm not sure that a bug or blueprint has been filed yet to address it
 though it is something that I've wanted for a little while now.

 Carl

 On Mon, Jan 5, 2015 at 4:13 PM, Itsuro ODA o...@valinux.co.jp wrote:

 Neutron experts,

 I want to stop scheduling to a specific {dhcp|l3}_agent without
 stopping router/dhcp services on it.
 I expected setting admin_state_up of the agent to False is met
 this demand. But this operation stops all services on the agent
 in actuality. (Is this behavior intended ? It seems there is no
 document for agent API.)

 I think admin_state_up of agents should affect only scheduling.
 If it is accepted I will submit a bug report and make a fix.

 Or should I propose a blueprint for adding function to stop
 agent's scheduling without stopping services on it ?

 I'd like to hear neutron experts' suggestions.

 Thanks.
 Itsuro Oda
 --
 Itsuro ODA o...@valinux.co.jp


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Itsuro ODA o...@valinux.co.jp


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without topping sevices

2015-01-07 Thread Itsuro ODA
Miguel, Kevin,

Thank you for your comments.

The motivation for this (from our customer) is the
design of a network-node reduction procedure.

They prefer to stop scheduling for the agents on the node
first, and then move individual services gradually with
*-remove and *-add, rather than stopping all
services on the node at once.

As Kevin mentioned, the agents must stay alive so that
operations for existing services can continue.

Thanks.
Itsuro Oda

On Wed, 7 Jan 2015 15:52:56 -0800
Kevin Benton blak...@gmail.com wrote:

 The problem is that if you just stop the service, it not only removes it
 from scheduling, but it stops it from receiving updates to floating IP
 changes, interface changes, etc. I think it would be nice to have a way to
 explicitly stop it from being scheduled new routers, but still act as a
 functioning L3 agent otherwise.
 
 On Wed, Jan 7, 2015 at 3:30 PM, Miguel Angel Ajo majop...@redhat.com
 wrote:
 
  You can stop the neutron-dhcp-agent or neutron-l3-agent,
  the agents should go not-active after reporting timeout.
 
  The actual network services (routers, dhcp, etc) will stay
  active into the node, but unmanaged. In some cases,
  if you have automatic rescheduling of the resources
  configured, those will be spawned on other hosts.
 
  Depending on your use case this will be enough or not.
  It’s intended for upgrades and maintenance. But not
  for controlling resources in a node.
 
  Miguel Angel Ajo
 
  On Thursday, 8 de January de 2015 at 00:20, Itsuro ODA wrote:
 
  Carl,
 
  Thank you for your comment.
 
  It seems there is no clear opinion about whether bug report or
  buleprint is better.
  So I submitted a bug report for the moment so that the requirememt
  is not forgotten.
  https://bugs.launchpad.net/neutron/+bug/1408488
 
  Thanks.
  Itsuro Oda
 
  On Tue, 6 Jan 2015 09:05:19 -0700
  Carl Baldwin c...@ecbaldwin.net wrote:
 
  Itsuro,
 
  It would be desirable to be able to be hide an agent from scheduling
  but no one has stepped up to make this happen. Come to think of it,
  I'm not sure that a bug or blueprint has been filed yet to address it
  though it is something that I've wanted for a little while now.
 
  Carl
 
  On Mon, Jan 5, 2015 at 4:13 PM, Itsuro ODA o...@valinux.co.jp wrote:
 
  Neutron experts,
 
  I want to stop scheduling to a specific {dhcp|l3}_agent without
  stopping router/dhcp services on it.
  I expected setting admin_state_up of the agent to False is met
  this demand. But this operation stops all services on the agent
  in actuality. (Is this behavior intended ? It seems there is no
  document for agent API.)
 
  I think admin_state_up of agents should affect only scheduling.
  If it is accepted I will submit a bug report and make a fix.
 
  Or should I propose a blueprint for adding function to stop
  agent's scheduling without stopping services on it ?
 
  I'd like to hear neutron experts' suggestions.
 
  Thanks.
  Itsuro Oda
  --
  Itsuro ODA o...@valinux.co.jp
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  --
  Itsuro ODA o...@valinux.co.jp
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Kevin Benton

-- 
Itsuro ODA o...@valinux.co.jp


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Is anyone working on the following patch?

2015-01-07 Thread Dolph Mathews
On Wed, Jan 7, 2015 at 10:32 AM, Lance Bragstad lbrags...@gmail.com wrote:

 https://review.openstack.org/#/c/113586/ is owned by dstanek but I
 understand he is out this week at a conference?


Correct.


 It might be worth dropping in #openstack-keystone and seeing if dstanek
 would be alright with you picking it up, since you're building on it.


I CC'd him here, as I figure async communication might be easier for him if
he's mostly AFK.



 On Wed, Jan 7, 2015 at 12:21 AM, Ajaya Agrawal ajku@gmail.com wrote:

 https://review.openstack.org/#/c/113586/

 Two of my patches depend on this patch.
 https://review.openstack.org/#/c/113277/
 https://review.openstack.org/#/c/110575/


 Cheers,
 Ajaya

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Project list (official un-official)

2015-01-07 Thread Adam Lawson
I've been looking for a list of projects that folks are working on. The
official list is simple to find, but when talking about things
like Octavia, Libra and other non-official/non-core programs, knowing what
people are working on would be pretty interesting.

Does an exhaustive list like this exist somewhere?


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] unit test migration failure specific to MySQL/MariaDB - 'uuid': used in a foreign key constraint 'block_device_mapping_instance_uuid_fkey'

2015-01-07 Thread Mike Bayer
OK, so it's looking like sql_mode='TRADITIONAL' is what allows it to work. So that 
is most of it. My MariaDB has no default sql_mode but oslo.db should be 
setting this; in any case this seems more like a local oslo.db connection 
type of thing that I can track down myself, so most of the mystery is solved! (at 
least the part that I didn't feel like getting into... which I did anyway).
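
For anyone wanting to poke at this locally, a minimal sketch of checking and forcing the session sql_mode through SQLAlchemy (the connection URL and credentials are placeholders); per the observation above, with TRADITIONAL set the ALTER from the paste should then behave the same on MariaDB as on stock MySQL.

# Illustrative sketch: check and force the session sql_mode, then re-run the
# ALTER from the paste. Connection URL and credentials are placeholders.
from sqlalchemy import create_engine

engine = create_engine('mysql+pymysql://root:secret@localhost/test')

with engine.connect() as conn:
    print(conn.execute("SELECT @@SESSION.sql_mode").scalar())
    conn.execute("SET SESSION sql_mode = 'TRADITIONAL'")
    conn.execute('ALTER TABLE foo CHANGE COLUMN blah blah INT NOT NULL')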


Mike Bayer mba...@redhat.com wrote:

 working with sdague on IRC, the first thing I’m seeing is that my MariaDB 
 server is disallowing a change in column that is UNIQUE and has an FK 
 pointing to it, and this is distinctly different from a straight up MySQL 
 server (see below).  
 
 http://paste.openstack.org/raw/155896/
 
 
 old school MySQL:
 
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 4840
 Server version: 5.6.15 Homebrew
 
 Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
 
 Oracle is a registered trademark of Oracle Corporation and/or its
 affiliates. Other names may be trademarks of their respective
 owners.
 
 Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
 mysql create table foo (id int, blah int, primary key (id), unique key 
 (blah)) engine=InnoDB;
 Query OK, 0 rows affected (0.01 sec)
 
 mysql create table bar(id int, blah_fk int, primary key (id), foreign key 
 (blah_fk) references foo(blah)) engine=InnoDB;
 Query OK, 0 rows affected (0.01 sec)
 
 mysql alter table foo change column blah blah int not null;
 Query OK, 0 rows affected (0.02 sec)
 Records: 0  Duplicates: 0  Warnings: 0
 
 mysql 
 
 
 
 MariaDB 10:
 
 MariaDB [test] create table foo (id int, blah int, primary key (id), unique 
 key (blah));
 Query OK, 0 rows affected (0.09 sec)
 
 MariaDB [test] create table bar(id int, blah_fk int, primary key (id), 
 foreign key (blah_fk) references foo(blah));
 Query OK, 0 rows affected (0.12 sec)
 
 MariaDB [test] alter table foo change column blah blah int not null;
 ERROR 1833 (HY000): Cannot change column 'blah': used in a foreign key 
 constraint 'bar_ibfk_1' of table 'test.bar'
 MariaDB [test] 
 
 Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
 On 1/6/2015 5:40 PM, Mike Bayer wrote:
 Hello -
 
 Victor Sergeyev and I are both observing the following test failure which 
 occurs with all the tests underneath 
 nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.This is 
 against master with a brand new tox environment and everything at the 
 default.
 
 It does not seem to be occurring on gates that run these tests and 
 interestingly the tests seem to complete very quickly (under seven seconds) 
 on the gate as well; the failures here take between 50-100 seconds to 
 occur, not fully deterministically, and only on the MySQL backend; the 
 Postgresql and SQLite versions of these tests pass.  I’m running against 
 MariaDB server 10.0.14 with Python 2.7.8 on Fedora 21.
 
 Below is the test just for test_walk_versions, but the warnings (not 
 necessarily the failures themselves) here also occur for test_migration_267 
 as well as test_innodb_tables.
 
 I’m still looking into what the cause of this is, I’d imagine it’s 
 something related to newer MySQL versions or perhaps MariaDB vs. MySQL, I’m 
 just putting it up here in case someone already knows what this is or has 
 some clue to save me some time figuring it out.  I apologize if I’m just 
 doing something dumb, I’ve only recently begun to run Nova’s test suite in 
 full against all backends, so I haven’t yet put intelligent thought into 
 this nor have I tried to yet look at the migration in question causing the 
 problem.  Will do that next.
 
 
 [mbayer@thinkpad nova]$ tox -e py27 -- 
 nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.test_walk_versions
 py27 develop-inst-noop: /home/mbayer/dev/openstack/nova
 py27 runtests: PYTHONHASHSEED='0'
 py27 runtests: commands[0] | find . -type f -name *.pyc -delete
 py27 runtests: commands[1] | bash tools/pretty_tox.sh 
 nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.test_walk_versions
 running testr
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./ 
 ${OS_TEST_PATH:-./nova/tests} --list
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./ 
 ${OS_TEST_PATH:-./nova/tests}  --load-list /tmp/tmpw7zqhE
 
 2015-01-06 18:28:12.913 32435 WARNING oslo.db.sqlalchemy.session 
 [req-5cc6731f-00ef-43df-8aec-4914a44d12c5 ] MySQL SQL mode is '', consider 
 enabling TRADITIONAL or STRICT_ALL_TABLES
 {0} 
 nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.test_walk_versions
  [51.553131s] ... FAILED
 
 Captured traceback:
 ~~~
Traceback (most recent call last):
  File 

Re: [openstack-dev] [nova] unit test migration failure specific to MySQL/MariaDB - 'uuid': used in a foreign key constraint 'block_device_mapping_instance_uuid_fkey'

2015-01-07 Thread Mike Bayer
working with sdague on IRC, the first thing I’m seeing is that my MariaDB 
server is disallowing a change in column that is UNIQUE and has an FK pointing 
to it, and this is distinctly different from a straight up MySQL server (see 
below).  

http://paste.openstack.org/raw/155896/


old school MySQL:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4840
Server version: 5.6.15 Homebrew

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql create table foo (id int, blah int, primary key (id), unique key (blah)) 
engine=InnoDB;
Query OK, 0 rows affected (0.01 sec)

mysql create table bar(id int, blah_fk int, primary key (id), foreign key 
(blah_fk) references foo(blah)) engine=InnoDB;
Query OK, 0 rows affected (0.01 sec)

mysql alter table foo change column blah blah int not null;
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql 



MariaDB 10:

MariaDB [test] create table foo (id int, blah int, primary key (id), unique 
key (blah));
Query OK, 0 rows affected (0.09 sec)

MariaDB [test] create table bar(id int, blah_fk int, primary key (id), foreign 
key (blah_fk) references foo(blah));
Query OK, 0 rows affected (0.12 sec)

MariaDB [test] alter table foo change column blah blah int not null;
ERROR 1833 (HY000): Cannot change column 'blah': used in a foreign key 
constraint 'bar_ibfk_1' of table 'test.bar'
MariaDB [test] 

Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 
 
 On 1/6/2015 5:40 PM, Mike Bayer wrote:
 Hello -
 
 Victor Sergeyev and I are both observing the following test failure which 
 occurs with all the tests underneath 
 nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.This is 
 against master with a brand new tox environment and everything at the 
 default.
 
 It does not seem to be occurring on gates that run these tests and 
 interestingly the tests seem to complete very quickly (under seven seconds) 
 on the gate as well; the failures here take between 50-100 seconds to occur, 
 not fully deterministically, and only on the MySQL backend; the Postgresql 
 and SQLite versions of these tests pass.  I’m running against MariaDB server 
 10.0.14 with Python 2.7.8 on Fedora 21.
 
 Below is the test just for test_walk_versions, but the warnings (not 
 necessarily the failures themselves) here also occur for test_migration_267 
 as well as test_innodb_tables.
 
 I’m still looking into what the cause of this is, I’d imagine it’s something 
 related to newer MySQL versions or perhaps MariaDB vs. MySQL, I’m just 
 putting it up here in case someone already knows what this is or has some 
 clue to save me some time figuring it out.  I apologize if I’m just doing 
 something dumb, I’ve only recently begun to run Nova’s test suite in full 
 against all backends, so I haven’t yet put intelligent thought into this nor 
 have I tried to yet look at the migration in question causing the problem.  
 Will do that next.
 
 
 [mbayer@thinkpad nova]$ tox -e py27 -- 
 nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.test_walk_versions
 py27 develop-inst-noop: /home/mbayer/dev/openstack/nova
 py27 runtests: PYTHONHASHSEED='0'
 py27 runtests: commands[0] | find . -type f -name *.pyc -delete
 py27 runtests: commands[1] | bash tools/pretty_tox.sh 
 nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.test_walk_versions
 running testr
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./ 
 ${OS_TEST_PATH:-./nova/tests} --list
 running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
 OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
 ${PYTHON:-python} -m subunit.run discover -t ./ 
 ${OS_TEST_PATH:-./nova/tests}  --load-list /tmp/tmpw7zqhE
 
 2015-01-06 18:28:12.913 32435 WARNING oslo.db.sqlalchemy.session 
 [req-5cc6731f-00ef-43df-8aec-4914a44d12c5 ] MySQL SQL mode is '', consider 
 enabling TRADITIONAL or STRICT_ALL_TABLES
 {0} 
 nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.test_walk_versions
  [51.553131s] ... FAILED
 
 Captured traceback:
 ~~~
 Traceback (most recent call last):
   File nova/tests/unit/db/test_migrations.py, line 151, in 
 test_walk_versions
 self.walk_versions(self.snake_walk, self.downgrade)
   File 
 /home/mbayer/dev/openstack/nova/.tox/py27/lib/python2.7/site-packages/oslo/db/sqlalchemy/test_migrations.py,
  line 193, in walk_versions
 self.migrate_up(version, with_data=True)
   File nova/tests/unit/db/test_migrations.py, line 148, in migrate_up
 super(NovaMigrationsCheckers, self).migrate_up(version, with_data)
   

Re: [openstack-dev] Project list (official un-official)

2015-01-07 Thread Jeremy Stanley
On 2015-01-07 11:51:21 -0800 (-0800), Adam Lawson wrote:
 I've been looking for a list of projects that folks are working on. The
 official list is simple to find for those but when talking about things like
 Octavia, Libra and other non-official/non-core programs, knowing what people
 are working on would be pretty interesting.
 
 Does an exhaustive list like this exist somewhere?

https://git.openstack.org/

(We need to bump the cgit page length again though, it's spanning
two pages at the moment.)
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.middleware 0.3.0 released

2015-01-07 Thread Doug Hellmann
The Oslo team is pleased to announce the release of
oslo.middleware 0.3.0: Oslo Middleware library

The primary reason for this release is to move the code
out of the oslo namespace package as part of
https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages
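
In practice the namespace move means consumers switch their imports from the oslo.* namespace package to the new top-level package. A minimal before/after sketch (the request_id module and its RequestId middleware are used as the example here; the WSGI app object is a placeholder):

# Before 0.3.0 (namespace package path):
#   from oslo.middleware import request_id
# From 0.3.0 on (new top-level package):
from oslo_middleware import request_id

def my_wsgi_app(environ, start_response):    # placeholder WSGI application
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

app = request_id.RequestId(my_wsgi_app)      # wrap the app with the middleware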

This release also adds a dependency on oslo.context. 

For more details, please see the git log history below and
 http://launchpad.net/oslo.middleware/+milestone/0.3.0

Please report issues through launchpad:
 http://bugs.launchpad.net/oslo.middleware



Changes in /home/dhellmann/repos/openstack/oslo.middleware  0.2.0..0.3.0

8e06ca5 Move files out of the namespace package
7c8e3e1 Don't use default value in LimitingReader
6824065 switch to oslo.context
b25e8c5 Workflow documentation is now in infra-manual

  diffstat (except docs and test files):

 CONTRIBUTING.rst |   7 +-
 openstack-common.conf|   2 -
 oslo/middleware/__init__.py  |  25 +++---
 oslo/middleware/base.py  |  45 +-
 oslo/middleware/catch_errors.py  |  32 +--
 oslo/middleware/correlation_id.py|  16 +---
 oslo/middleware/debug.py |  49 +--
 oslo/middleware/i18n.py  |  35 
 oslo/middleware/openstack/__init__.py|   0
 oslo/middleware/openstack/common/__init__.py |  17 
 oslo/middleware/openstack/common/context.py  | 126 ---
 oslo/middleware/opts.py  |  45 --
 oslo/middleware/request_id.py|  29 +-
 oslo/middleware/sizelimit.py |  79 +
 oslo_middleware/__init__.py  |  23 +
 oslo_middleware/base.py  |  56 
 oslo_middleware/catch_errors.py  |  43 +
 oslo_middleware/correlation_id.py|  27 ++
 oslo_middleware/debug.py |  60 +
 oslo_middleware/i18n.py  |  35 
 oslo_middleware/opts.py  |  45 ++
 oslo_middleware/request_id.py|  40 +
 oslo_middleware/sizelimit.py |  95 
 requirements.txt |   1 +
 setup.cfg|   3 +-
 tests/test_sizelimit.py  |   8 ++
 tests/test_warning.py|  61 +
 tox.ini  |   6 +-
 34 files changed, 769 insertions(+), 488 deletions(-)

  Requirements updates:

 diff --git a/requirements.txt b/requirements.txt
 index 275fa4f..1b66bf0 100644
 --- a/requirements.txt
 +++ b/requirements.txt
 @@ -7,0 +8 @@ oslo.config=1.4.0  # Apache-2.0
 +oslo.context=0.1.0 # Apache-2.0
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Is anyone working on the following patch?

2015-01-07 Thread dstanek
Hmmm... Might want to check with Morgan to make sure we want that. I thought he 
had (and probably merged) an alternative. Otherwise I'm fine if you want to 
pick it up and rebase it.



—
Sent from Mailbox

On Wed, Jan 7, 2015 at 2:48 PM, Dolph Mathews dolph.math...@gmail.com
wrote:

 On Wed, Jan 7, 2015 at 10:32 AM, Lance Bragstad lbrags...@gmail.com wrote:
 https://review.openstack.org/#/c/113586/ is owned by dstanek but I
 understand he is out this week at a conference?


 Correct.
 It might be worth dropping in #openstack-keystone and seeing if dstanek
 would be alright with you picking it up, since you're building on it.

 I CC'd him here, as I figure async communication might be easier for him if
 he's mostly AFK.

 On Wed, Jan 7, 2015 at 12:21 AM, Ajaya Agrawal ajku@gmail.com wrote:

 https://review.openstack.org/#/c/113586/

 Two of my patches depend on this patch.
 https://review.openstack.org/#/c/113277/
 https://review.openstack.org/#/c/110575/


 Cheers,
 Ajaya

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Is anyone working on the following patch?

2015-01-07 Thread Steve Martinelli
Ajaya, also, I think you can go ahead and rebase your patches on master, 
as
I don't see a need for them to depend on the patch you mention.

AFAICT you are adding new caching for identity and trusts, we already have
caching for a few other spots so this shouldn't conflict with dstanek's 
work
of improving caching, as a whole.

Aside from the usual config option conflicts, I don't see any other files 
that
will conflict.

Steve

Dolph Mathews dolph.math...@gmail.com wrote on 01/07/2015 02:48:21 PM:

 From: Dolph Mathews dolph.math...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 01/07/2015 02:57 PM
 Subject: Re: [openstack-dev] [Keystone] Is anyone working on the 
 following patch?
 
 On Wed, Jan 7, 2015 at 10:32 AM, Lance Bragstad lbrags...@gmail.com 
wrote:
 https://review.openstack.org/#/c/113586/ is owned by dstanek but I 
 understand he is out this week at a conference?
 
 Correct.
  
 It might be worth dropping in #openstack-keystone and seeing if 
 dstanek would be alright with you picking it up, since you're building 
on it.
 
 I CC'd him here, as I figure async communication might be easier for
 him if he's mostly AFK.
  
 
 On Wed, Jan 7, 2015 at 12:21 AM, Ajaya Agrawal ajku@gmail.com 
wrote:
 https://review.openstack.org/#/c/113586/
 
 Two of my patches depend on this patch.
 https://review.openstack.org/#/c/113277/
 https://review.openstack.org/#/c/110575/
 
 Cheers,
 Ajaya
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] local and gate pep8 check comparison

2015-01-07 Thread Chen CH Ji
Thanks, I did the 'tox -r -e pep8' in my local env and it seems the problem
is gone ... thanks a lot

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Matt Riedemann mrie...@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Date:   01/06/2015 05:29 PM
Subject:    Re: [openstack-dev] [nova] local and gate pep8 check comparison





On 1/6/2015 10:21 AM, Chen CH Ji wrote:
 I got following error in patch
 https://review.openstack.org/#/c/137009/ from Jenkins

 _2015-01-06 12:24:20.445_
 
http://logs.openstack.org/09/137009/2/check/gate-nova-pep8/b868ad8/console.html#_2015-01-06_12_24_20_445
 |
 ./nova/compute/manager.py:5325:13: N331  Use LOG.warning due to
 compatibility with py3
 _2015-01-06 12:24:20.445_
 
http://logs.openstack.org/09/137009/2/check/gate-nova-pep8/b868ad8/console.html#_2015-01-06_12_24_20_445
 |
 ./nova/compute/manager.py:5726:13: N331  Use LOG.warning due to
 compatibility with py3
 _2015-01-06 12:24:20.916_
 
http://logs.openstack.org/09/137009/2/check/gate-nova-pep8/b868ad8/console.html#_2015-01-06_12_24_20_916
 |
 ERROR: InvocationError:
 '/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/bin/flake8'

 but I didn't get it with either ./run_test.sh -8 or tox -e pep8 in my
 local test env. I'm pretty sure I have the latest nova code, so I think
 I should get the same result?

 Thanks a lot

 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
 District, Beijing 100193, PRC


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Stale venv?  Try tox -r -e pep8.

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Kumar, Om (Cloud OS RD)
If it's a separate project, can it be extended to perform out-of-band discovery 
too? That way there will be a single service to perform in-band as well as 
out-of-band discoveries. Maybe it could follow a driver framework for 
discovering nodes, where one driver could be native (in-band) and another could 
be iLO-specific, etc. 

Just a thought.

-Om

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: 07 January 2015 14:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:
 So is it possible to just integrate this project into ironic? I mean when you 
create an ironic node, it will start discovery in the background. So we don't 
 need two services?
Well, the decision at the summit was that it's better to keep it separate. 
Please see https://review.openstack.org/#/c/135605/ for details on future 
interaction between discoverd and Ironic.

 Just a thought, thanks.

 BR
 Zhou Zhenzan

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: Monday, January 5, 2015 4:49 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

 On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:
 Hi, Dmitry

 I think this is a good project.
 I got one question: what is the relationship with ironic-python-agent?
 Thanks.
 Hi!

 No relationship right now, but I'm hoping to use IPA as a base for 
 introspection ramdisk in the (near?) future.

 BR
 Zhou Zhenzan

 -Original Message-
 From: Dmitry Tantsur [mailto:dtant...@redhat.com]
 Sent: Thursday, December 11, 2014 10:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Ironic] ironic-discoverd status update

 Hi all!

 As you know I actively promote ironic-discoverd project [1] as one of the 
 means to do hardware inspection for Ironic (see e.g. spec [2]), so I decided 
 it's worth to give some updates to the community from time to time. This 
 email is purely informative, you may safely skip it, if you're not 
 interested.

 Background
 ==

 The discoverd project (I usually skip the ironic- part when talking 
 about it) solves the problem of populating information about a node 
 in Ironic database without help of any vendor-specific tool. This 
 information usually includes Nova scheduling properties (CPU, RAM, 
 disk
 size) and MAC's for ports.

 Introspection is done by booting a ramdisk on a node, collecting data there 
 and posting it back to discoverd HTTP API. Thus actually discoverd consists 
 of 2 components: the service [1] and the ramdisk [3]. The service handles 2 
 major tasks:
 * Processing data posted by the ramdisk, i.e. finding the node in Ironic 
 database and updating node properties with new data.
 * Managing iptables so that the default PXE environment for 
 introspection does not interfere with Neutron

 The project was born from a series of patches to Ironic itself after we 
 discovered that this change is going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and its RPM is a part of Juno RDO. 
 After the Paris summit, we agreed on bringing it closer to the Ironic 
 upstream, and now discoverd is hosted on StackForge and tracks bugs on 
 Launchpad.

 Future
 ==

 The basic feature of discoverd: supply Ironic with properties required for 
 scheduling, is pretty finished as of the latest stable series 0.2.

 However, more features are planned for release 1.0.0 this January [5].
 They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC 
 MAC's.

 Plugability
 ~~~

 An interesting feature of discoverd is support for plugins, which I prefer 
 to call hooks. It's possible to hook into the introspection data processing 
 chain in 2 places:
* Before any data processing. This opens the opportunity to adapt discoverd to 
 ramdisks that have different data format. The only requirement is that the 
 ramdisk posts a JSON object.
 * After a node is found in Ironic database and ports are created for MAC's, 
 but before any actual data update. This gives an opportunity to alter, which 
 properties discoverd is going to update.

 Actually, even the default logic of update Node.properties is 
 contained in a plugin - see SchedulerHook in 
 ironic_discoverd/plugins/standard.py
 [6]. This plugability opens wide opportunities for integrating with 3rd 
 party ramdisks and CMDB's (which as we know Ironic is not ;).
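
To make the hook idea concrete, a custom hook could be shaped roughly like
this (an illustrative sketch only; the class and method names here are
assumptions, not the exact discoverd plugin interface - see
ironic_discoverd/plugins/standard.py for the real one):

    # Sketch of a processing hook; method names are assumed for illustration.
    class MyVendorHook(object):
        def before_processing(self, node_info):
            # First hook point: runs before any data processing, so a
            # 3rd-party ramdisk's JSON can be translated into the fields
            # discoverd expects.
            if 'ram_mb' in node_info:
                node_info.setdefault('memory_mb', node_info.pop('ram_mb'))

        def before_update(self, node, ports, node_info):
            # Second hook point: runs after the node is found in Ironic and
            # ports are created, but before properties are updated, so
            # fields managed elsewhere (e.g. a CMDB) can be dropped.
            node_info.pop('local_gb', None)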

 Enrolling
 ~

 Some people have found it limiting that the introspection requires power 
 credentials (IPMI user name and password) to be already set. The recent set 
 of patches [7] introduces a possibility to request manual power on of the 
 machine and update IPMI credentials via the ramdisk to the expected values. 
 Note that support of this feature in the reference ramdisk [3] is not ready 
 yet. Also note that this scenario 

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Matt Keenan

On 01/07/15 14:24, Kumar, Om (Cloud OS RD) wrote:

If it's a separate project, can it be extended to perform out-of-band discovery 
too? That way there will be a single service to perform in-band as well as 
out-of-band discoveries. Maybe it could follow a driver framework for 
discovering nodes, where one driver could be native (in-band) and another could 
be iLO-specific, etc.



I believe the following spec outlines plans for out-of-band discovery:
  https://review.openstack.org/#/c/100951/

No idea what the progress is with regard to implementation within the 
Kilo cycle though.


cheers

Matt


Just a thought.

-Om

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 14:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into ironic? I mean when you 
create an ironic node, it will start discovery in the background. So we don't 
need two services?

Well, the decision at the summit was that it's better to keep it separate. 
Please see https://review.openstack.org/#/c/135605/ for details on future 
interaction between discoverd and Ironic.


Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for 
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one of the means 
to do hardware inspection for Ironic (see e.g. spec [2]), so I decided it's 
worth to give some updates to the community from time to time. This email is 
purely informative, you may safely skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the ironic- part when talking
about it) solves the problem of populating information about a node
in Ironic database without help of any vendor-specific tool. This
information usually includes Nova scheduling properties (CPU, RAM,
disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting data there and 
posting it back to discoverd HTTP API. Thus actually discoverd consists of 2 
components: the service [1] and the ramdisk [3]. The service handles 2 major 
tasks:
* Processing data posted by the ramdisk, i.e. finding the node in Ironic 
database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself after we 
discovered that this change is going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and its RPM is a part of Juno RDO. 
After the Paris summit, we agreed on bringing it closer to the Ironic upstream, 
and now discoverd is hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd: supply Ironic with properties required for 
scheduling, is pretty finished as of the latest stable series 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC 
MAC's.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I prefer to 
call hooks. It's possible to hook into the introspection data processing chain 
in 2 places:
* Before any data processing. This opens the opportunity to adapt discoverd to 
ramdisks that have different data format. The only requirement is that the 
ramdisk posts a JSON object.
* After a node is found in Ironic database and ports are created for MAC's, but 
before any actual data update. This gives an opportunity to alter, which 
properties discoverd is going to update.

Actually, even the default logic of update Node.properties is
contained in a plugin - see SchedulerHook in
ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with 3rd party 
ramdisks and CMDB's (which as we know Ironic is not ;).

Enrolling
~

Some people have found it limiting that the introspection requires power 
credentials (IPMI user name and password) to be already set. The recent set of 
patches [7] introduces a possibility to request manual power on of the 

Re: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval

2015-01-07 Thread Moshe Levi
Also this one:
Nova: Add spec for VIF Driver for SR-IOV InfiniBand
https://review.openstack.org/#/c/131729/


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 07, 2015 5:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][sriov] SRIOV related specs pending for 
approval

Hi Joe and others,

One of the topics for tomorrow's Nova IRC meeting is the k-2 spec exception process. 
I'd like to bring the following specs up again for consideration:

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we'd like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we've submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads


Thanks for your kindly consideration.

-Robert

On 12/22/14, 1:20 PM, Joe Gordon 
joe.gord...@gmail.com wrote:



On Fri, Dec 19, 2014 at 6:53 AM, Robert Li (baoli) 
ba...@cisco.com wrote:
Hi Joe,

See this thread on the SR-IOV CI from Irena and Sandhya:

http://lists.openstack.org/pipermail/openstack-dev/2014-November/050658.html

http://lists.openstack.org/pipermail/openstack-dev/2014-November/050755.html

I believe that Intel is building a CI system to test SR-IOV as well.

Thanks for the clarification.


Thanks,
Robert


On 12/18/14, 9:13 PM, Joe Gordon 
joe.gord...@gmail.com wrote:



On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli) 
ba...@cisco.com wrote:
Hi,

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we'd like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we've submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Neutron-Network settings support for vnic-type
https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

Now that the specs deadline is approaching, I'd like to bring them up here for 
exception consideration. A lot of work has been put into them, and we'd 
like to see them get through for Kilo.

We haven't started the spec exception process yet.


Regarding CI for PCI passthrough and SR-IOV, see the attached thread.

Can you share this via a link to something on 
http://lists.openstack.org/pipermail/openstack-dev/


thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pip 6.0.6 breaking py2* jobs - bug 1407736

2015-01-07 Thread Matt Riedemann



On 1/6/2015 11:07 AM, Matt Riedemann wrote:



On 1/6/2015 7:27 AM, Ihar Hrachyshka wrote:


On 01/06/2015 03:09 AM, Matt Riedemann wrote:



On 1/5/2015 2:16 PM, Doug Hellmann wrote:


On Jan 5, 2015, at 12:22 PM, Doug Hellmann d...@doughellmann.com
wrote:



On Jan 5, 2015, at 12:00 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


There is a deprecation warning in pip 6.0.6 which is making the
py26 (on stable branches) and py27 jobs hit subunit log sizes of
over 50 MB which makes the job fail.

A logstash query shows this started happening around 1/3 which is
when pip 6.0.6 was released. In Nova alone there are nearly 18
million hits of the deprecation warning.

Should we temporarily block so that pip  6.0.6?

https://bugs.launchpad.net/nova/+bug/1407736


I think this is actually a change in pkg_resources (in the
setuptools dist) [1], being triggered by stevedore using
require=False to avoid checking dependencies when plugins are loaded.

Doug

[1]
https://bitbucket.org/pypa/setuptools/commits/b1c7a311fb8e167d026126f557f849450b859502
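
(For context, the deprecation is roughly about this pattern - a small sketch,
with the entry point group chosen only as an example:)

    from pkg_resources import iter_entry_points

    ep = next(iter_entry_points('oslo.config.opts'))  # example group

    # Old pattern: load the plugin without verifying its requirements.
    # Recent setuptools deprecates the require flag and warns on every call.
    plugin = ep.load(require=False)

    # New split API: resolve() imports the object without dependency checks;
    # require() can be called separately when verification is wanted.
    plugin = ep.resolve()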




After some discussion with Jason Coombs and dstufft, a version of
setuptools with a split API to replace the deprecated option was
released. I have a patch up to teach stevedore about the new
methods[1].

Doug

[1] https://review.openstack.org/#/c/145042/1






--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The stevedore patch was merged. Do we need a release of stevedore and
a global-requirements update to then get the deprecation warnings
fixed in nova (on master and stable/juno)?



I guess so. Also, Icehouse is affected too. I've checked Nova
requirements.txt for Icehouse, and we don't cap the stevedore version, so a
new release will be automatically propagated to all new jobs.

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I've closed the nova bug as a duplicate of the stevedore bug since the
latest release of stevedore fixed the problem for my nova change on
stable/juno, thanks Doug!



I just noticed that this is still an issue with the paste library, 
unfortunately.  You can see the hits in a nova stable/juno run here [1].


There are 20736 instances of the deprecation warning in that run.

[1] 
http://logs.openstack.org/74/145374/1/check/gate-nova-python27/6201323/console.html#_2015-01-06_23_34_35_477


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] [Heat] validating properties of Sahara resources in Heat

2015-01-07 Thread michael mccune

hi Pavlo,

i'm not sure i can answer all these questions but i'll give it my best 
shot and hopefully others will chime in =)


On 01/05/2015 08:17 AM, Pavlo Shchelokovskyy wrote:

floating_ip_pool:

I was pointed that Sahara could be configured to use netns/proxy to
access the cluster VMs instead of floating IPs.

My questions are:
- Can that particular configuration setting (netns/proxy) be assessed
via saharaclient?


i don't think so. these are configured through the conf file associated 
with the controller(s) and i don't think we expose an endpoint for 
querying them.



- What would be the result of providing floating_ip_pool when Sahara is
indeed configured with netns/proxy?


i think it will accept the floating_ip_pool values during cluster creation.


   Is it going to function normally, having just wasted several floating
IPs from quota?


if sahara is configured for netns/proxy then it should use that method 
for accessing the nodes. that being said, i think if you provide 
floating ip pools then those will get sent along to the provisioning 
engine, so it may waste the IPs.


this is a case where we could probably check for these values and either 
produce an error or sanitize them. i'll have to test this in a live 
environment.



- And more crucial, what would happen if Sahara is _not_ configured to
use netns/proxy and not provided with floating_ip_pool?


if sahara is configured to _not_ use netns/proxy, but _is_ configured 
for floating pool then you will get an error for not providing the 
floating pool id.



   Can that lead to cluster being created (at least VMs for it spawned)
but Sahara would not be able to access them for configuration?


i don't think so. i think you will get an error when creating the 
cluster (not the template).



   Would Sahara in that case kill the cluster/shutdown VMs or hang in
some cluster failed state?


i don't think it would get that far, you should see an error when 
creating the cluster.



neutron_management_network:
I understand the point that it is redundant to use it in both resources
(although we are stuck with deprecation period as those are part of Juno
release already).

Still, my questions are:
- would this property passed during creation of Cluster override the one
passed during creation of Cluster Template?


in this case, i think the new value will override the original value in 
the template.



- what would happen if I set this property (pass it via saharaclient)
when Nova-network is in use?


i might need a little clarification on this question. but, if you have 
sahara configured to _not_ use neutron and you supply a 
neutron_management_network during cluster template creation, then i 
think sahara will record the network but it won't actually try to 
connect over that network.


this may be another area where we could produce an error if the network 
is supplied.



- what if I _do not_ pass this property and Neutron has several networks
available?


i think this will result in an error during the cluster creation. i know 
sahara will produce an error in this case, i'm just unsure as to when it 
will be generated.



The reason I'm asking is that in Heat we try to follow fail-fast
approach, especially for billable resources,
to avoid situation when a (potentially huge) stack is being created and
breaks on last or second-to-last resource,
leaving user with many resources spawned (even if for a short time if
the stack rollback is enabled)
which might cost a hefty sum of money for nothing. That is why we are
trying to validate the template
as thoroughly as we can before starting to create any actual resources
in the cloud.


i totally agree with the fail-fast approach, and in general i think 
that sahara will attempt to follow that. in the cases you have described 
above i think the most likely fail conditions will be when attempting 
cluster creation. but, i don't think that the provisioning engine will 
be called unless we can validate these networks.


hopefully this helps,
mike


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reply: [Nova] RemoteError: Remote error: OperationalError (OperationalError) (1048, Column 'instance_uuid' cannot be null)

2015-01-07 Thread Jianbo Zheng
Thanks a lot, dude.
Updating the controller node and compute nodes solved my problem.

Regards,
Jianbo Zheng
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] IRC logging

2015-01-07 Thread David Stanek
It's also important to remember that IRC channels are typically not private
and are likely already logged by dozens of people anyway.

On Tue, Jan 6, 2015 at 1:22 PM, Christopher Aedo ca...@mirantis.com wrote:

 On Tue, Jan 6, 2015 at 2:49 AM, Flavio Percoco fla...@redhat.com wrote:
  Fully agree... I don't see how enabling logging should be a limitation
  for freedom of thought. We've used it in Zaqar since day 0 and it's
  been of great help for all of us.
 
  The logging does not remove the need of meetings where decisions and
  more relevant/important topics are discussed.

 Wanted to second this as well.  I'm strongly in favor of logging -
 looking through backlogs of chats on other channels has been very
 helpful to me in the past, and it is sure to help others in the future.
 I don't think there is danger of anyone pointing to a logged IRC
 conversation in this context as some statement of record.

 -Christopher

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api]do we need to have a spec for all api related changes?

2015-01-07 Thread Matt Riedemann



On 1/7/2015 11:16 AM, Joe Gordon wrote:



On Tue, Jan 6, 2015 at 7:43 PM, Eli Qiao ta...@linux.vnet.ibm.com wrote:

hi all:
I have a patch [1] that just makes slight changes to the API. Do I need to
write a spec (which feels like a waste of time waiting for approval)?
Since api-microversion [2] is almost done, can we just feel free to
add changes as microversion API changes?
i.e. bump the version and write down the changes in rest_api_version_history.rst


We should always be careful about making API changes, so all API changes
need a spec.

http://docs.openstack.org/developer/nova/devref/kilo.blueprints.html#kilo

[1] https://review.openstack.org/#/c/144914/
[2]

https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bp/api-microversions,n,z

--
Thanks,
Eli (Li Yong) Qiao


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, here is another example [1].

[1] https://review.openstack.org/#/c/125471/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] [Heat] validating properties of Sahara resources in Heat

2015-01-07 Thread Andrew Lazarev
Answers inlined and marked as [AL].

On Mon, Jan 5, 2015 at 5:17 AM, Pavlo Shchelokovskyy 
pshchelokovs...@mirantis.com wrote:

 Hi all,

 I would like to ask Sahara developers' opinion on two bugs raised against
 Heat's resources - [1] and [2].
 Below I am going to repeat some of my comments from those bugs and
 associated Gerrit reviews [3] to have the conversation condensed here in ML.

 In Heat's Sahara-specific resources we have such properties as
 floating_ip_pool for OS::Sahara::NodeGroupTemplate [4]
 and neutron_management_network for both OS::Sahara::ClusterTemplate [5]
 and OS::Sahara::Cluster [6].
 My questions are about when and under which conditions those properties
 are required to successfully start a Sahara Cluster.

 floating_ip_pool:

 I was pointed that Sahara could be configured to use netns/proxy to access
 the cluster VMs instead of floating IPs.

 My questions are:
 - Can that particular configuration setting (netns/proxy) be assessed via
 saharaclient?


[AL] No, settings are configured in sahara.conf and can hardly be checked
outside of sahara.


 - What would be the result of providing floating_ip_pool when Sahara is
 indeed configured with netns/proxy?

  Is it going to function normally, having just wasted several floating IPs
 from quota?


[AL] It will assign floating IP as requested. Floating IP could be used not
only for management by Sahara, but for other purposes too. User could
request to assign floating IP.


 - And more crucial, what would happen if Sahara is _not_ configured to use
 netns/proxy and not provided with floating_ip_pool?
   Can that lead to cluster being created (at least VMs for it spawned) but
 Sahara would not be able to access them for configuration?
   Would Sahara in that case kill the cluster/shutdown VMs or hang in some
 cluster failed state?


[AL] Sahara will return validation error on attempt to create cluster. No
resources will be created.

neutron_management_network:
 I understand the point that it is redundant to use it in both resources
 (although we are stuck with deprecation period as those are part of Juno
 release already).


[AL] neutron_management_network must be specified somewhere in case of
neutron. It could be either template OR cluster. No need to specify it in
both places.



 Still, my questions are:
 - would this property passed during creation of Cluster override the one
 passed during creation of Cluster Template?


[AL] Yes, Sahara looks to template only when no value provided in cluster
request.


 - what would happen if I set this property (pass it via saharaclient) when
 Nova-network is in use?


[AL] Validation error will be returned


 - what if I _do not_ pass this property and Neutron has several networks
 available?


[AL] Validation error will be returned even if only one neutron network
available. Sahara currently doesn't support automatic network selection
(could be a nice feature).


 The reason I'm asking is that in Heat we try to follow fail-fast
 approach, especially for billable resources,
 to avoid situation when a (potentially huge) stack is being created and
 breaks on last or second-to-last resource,
 leaving user with many resources spawned (even if for a short time if the
 stack rollback is enabled)
 which might cost a hefty sum of money for nothing. That is why we are
 trying to validate the template
 as thoroughly as we can before starting to create any actual resources in
 the cloud.

 Thus I'm interested in finding the best possible (or least-worse)
 cover-it-all strategy
 for validating properties being set for these resources.

 [1] https://bugs.launchpad.net/heat/+bug/1399469
 [2] https://bugs.launchpad.net/heat/+bug/1402844
 [3] https://review.openstack.org/#/c/141310
 [4]
 https://github.com/openstack/heat/blob/master/heat/engine/resources/sahara_templates.py#L136
 [5]
 https://github.com/openstack/heat/blob/master/heat/engine/resources/sahara_templates.py#L274
 [6]
 https://github.com/openstack/heat/blob/master/heat/engine/resources/sahara_cluster.py#L79

 Best regards,

 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Is anyone working on the following patch?

2015-01-07 Thread Morgan Fainberg
I think I was just piling on your patch set there. Let me 2x check but that 
might just need a rebase now. 

--Morgan 

Sent via mobile

 On Jan 7, 2015, at 12:46, dsta...@dstanek.com wrote:
 
 Hmmm... Might want to check with Morgan to make sure we want that. I thought 
 he had (and probably merged) an alternative. Otherwise I'm fine if you want 
 to pick it up and rebase it.
 
 —
 Sent from Mailbox
 
 
 On Wed, Jan 7, 2015 at 2:48 PM, Dolph Mathews dolph.math...@gmail.com 
 wrote:
 
 On Wed, Jan 7, 2015 at 10:32 AM, Lance Bragstad lbrags...@gmail.com wrote:
 https://review.openstack.org/#/c/113586/ is owned by dstanek but I 
 understand he is out this week at a conference?
 
 Correct.
  
 It might be worth dropping in #openstack-keystone and seeing if dstanek 
 would be alright with you picking it up, since you're building on it.
 
 I CC'd him here, as I figure async communication might be easier for him if 
 he's mostly AFK.
  
 
 On Wed, Jan 7, 2015 at 12:21 AM, Ajaya Agrawal ajku@gmail.com wrote:
 https://review.openstack.org/#/c/113586/
 
 Two of my patches depend on this patch.
 https://review.openstack.org/#/c/113277/
 https://review.openstack.org/#/c/110575/
 
 
 Cheers,
 Ajaya
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] RemoteError: Remote error: OperationalError (OperationalError) (1048, Column 'instance_uuid' cannot be null)

2015-01-07 Thread Jianbo Zheng
Hi there,

I have the same issue on my second and third compute nodes. The first
compute node is working properly with the controller node.
My environment is Juno + RHEL7.0.

I don't think the issue is from the new instance not having a uuid.
Actually, in my opinion, in the log, 67e215e0-2193-439d-89c4-be8c378df78d
is the uuid, right?

2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance:
67e215e0-2193-439d-89c4-be8c378df78d] [u'Traceback (most recent call
last):\n', u'  File .

​Any suggestion on solving this issue?  ​

Regards,
Jianbo Zheng
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2015-01-07 Thread Andrew Woodward
On Wed, Jan 7, 2015 at 12:59 AM, Przemyslaw Kaminski
pkamin...@mirantis.com wrote:

 Hello,

 The updated version of monitoring code is available here:

 https://review.openstack.org/#/c/137785/

 This is based on monit, as was agreed in this thread. The drawback of
 monit is that it's basically a very simple system that doesn't track the
 state of checkers, so some Python code is still needed so that the user
 isn't spammed with low disk space notifications every minute.

can we make the alert an asserted state that needs to be cleared to
remove the warning? that way once asserted it won't re-raise the
error.


 On 01/05/2015 10:40 PM, Andrew Woodward wrote:
 There are two threads here that need to be unraveled from each
 other.

 1. We need to prevent fuel from doing anything if the OS is out of
 disk space. It leads to a very broken database from which it
 requires a developer to reset to a usable state. From this point we
 need to * develop a method for locking down the DB writes so that
 fuel becomes RO until space is freed

 It's true that full disk space + DB writes can result in fatal
 database failure. I just don't know if we can lock the DB just like
 that? What if deployment is in progress?

We could do some form of complicated maths around guessing how much
space we need to finish a task, but let's say:
20% free space: we warn
5% free space: we block tasks from starting
(both should be configurable, and probably ignore-able)

then we also need to have a separate volume for the DB from the logs.
This will remove the need to do any complicated logic around blocking in
the DB
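
For the asserted-state idea, the Python side could do something roughly like
this (an illustrative sketch only; thresholds, the path and the notify hook
are assumptions, not the code in the review):

    import os

    WARN_FREE = 0.20    # warn below 20% free space (assumed default)
    BLOCK_FREE = 0.05   # block new tasks below 5% free space (assumed default)

    _alert_raised = False   # asserted state: raise once, clear explicitly

    def check_free_space(path='/var/log', notify=lambda msg: None):
        global _alert_raised
        st = os.statvfs(path)
        free_ratio = float(st.f_bavail) / st.f_blocks

        if free_ratio < WARN_FREE and not _alert_raised:
            _alert_raised = True    # assert once instead of spamming each run
            notify('Low disk space on %s: %.0f%% free'
                   % (path, free_ratio * 100))
        elif free_ratio >= WARN_FREE and _alert_raised:
            _alert_raised = False   # space was freed, clear the assertion

        return free_ratio >= BLOCK_FREE   # False => don't start new tasks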

 I think the first way to reduce disk space usage would be to set
 logging level to WARNING instead of DEBUG. It's good to have DEBUG
 during development but I don't think it's that good for production.
 Besides it slows down deployment much, from what I observed.

The default logging level is supposed to be WARNING not debug.


 * develop a method (or re-use existing) to notify the user that a
 serious error state exists on the host. ( that could not be
 dismissed)

 Well this is done already in the review I've linked above. It
 basically posts a notification to the UI system. Everything still
 works as before though until the disk is full. The CLI doesn't
 communicate in any way with notifications AFAIK so the warning is not
 shown there.

 * we need some API that can lock / unlock the DB * we need some
 monitor process that will trigger the lock/unlock

 This one can be easily changed with the code in the above review request.

I think this should become blocking tasks, not the DB itself, as above


 2. We need monitoring for the master node and fuel components in
 general as discussed at length above. unless we intend to use this
  to also monitor the services on deployed nodes (likely bad), then
  what we use to do this is irrelevant to getting this started. If
 we are intending to use this to also monitor deployed nodes, (again
 bad for the fuel node to do) then we need to standardize with what
 we monitor the cloud with (Zabbix currently) and offer a single
 pane of glass. Federation in the monitoring becomes a critical
 requirement here as having more than one pane of glass is an
 operations nightmare.

 AFAIK installation of Zabbix is optional. We want obligatory
 monitoring of the master which would somehow force its installation on
 the cloud nodes.

 P.


 Completing #1 is very important in the near term as I have had to
 un-brick several deployments over it already. Also, in my mind
 these are also separate tasks.

 On Thu, Nov 27, 2014 at 1:19 AM, Simon Pasquier
 spasqu...@mirantis.com wrote:
 I've added another option to the Etherpad: collectd can do basic
  threshold monitoring and run any kind of scripts on alert
 notifications. The other advantage of collectd would be the RRD
 graphs for (almost) free. Of course since monit is already
 supported in Fuel, this is the fastest path to get something
 done. Simon

 On Thu, Nov 27, 2014 at 9:53 AM, Dmitriy Shulyak
 dshul...@mirantis.com wrote:

 Is it possible to send http requests from monit, e.g for
 creating notifications? I scanned through the docs and found
 only alerts for sending mail, also where token (username/pass)
  for monit will be stored?

 Or maybe there is another plan? without any api interaction

 On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski
 pkamin...@mirantis.com wrote:

 This I didn't know. It's true in fact, I checked the
 manifests. Though monit is not deployed yet because of lack
 of packages in Fuel ISO. Anyways, I think the argument about
  using yet another monitoring service is now rendered
 invalid.

 So +1 for monit? :)

 P.


 On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:

 Monit is easy and is used to control states of Compute nodes.
 We can adopt it for master node.

 -- Best regards, Sergii Golovatiuk, Skype #golserge IRC
 #holser

 On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw 

Re: [openstack-dev] [Manila]Rename driver mode

2015-01-07 Thread Li, Chen
Update my proposal again:

As a new bird in Manila, I started using/learning Manila with the generic
driver. When I reached driver mode, I became really confused, because I
couldn't stop myself from jumping to the ideas: share server == nova instance,
svm == share virtual machine == nova instance.

Then I tried GlusterFS, which works under single_svm_mode. I asked why it is
the single mode, and the answer I got was this is an approach without usage
of share-servers == without using share-servers, then why single???
More confusing! :(


Now I know the mistake I made is ridiculous.
Great thanks to vponomaryov & ganso, who made a big effort helping me figure
out why I was wrong.


But I don't think I'm the last person who will make this mistake.
So I hope we can change the driver mode names to be less confusing and easier
to understand.


First, svm should be removed, or at least changed to ss (share-server) to
make it consistent with share-server.
I don't like single/multi, because that makes me think of the number of
share-servers, and makes me want to ask: if I create a share, does that share
need multiple share-servers? Why?

Also, when I tried GlusterFS (installed following
http://www.gluster.org/community/documentation/index.php/QuickStart), when
testing the GlusterFS volume it said: use one of the servers to mount the
volume. Doesn't that mean any server in the cluster can be used, with no
difference between them? So, is there a way to change the GlusterFS driver to
add more than one glusterfs_target, where all glusterfs_targets are replicas
of each other? Then, when Manila creates a share, it would choose one target
to use. This would distribute data traffic across the cluster: higher
bandwidth, higher performance, right? == This is single_svm_mode, but
obviously not single.


vponomaryov & ganso suggested basic_mode and advanced_mode, but I think
basic/advanced is more of a driver-perspective concept. A driver might already
have its own concept of basic/advanced, beyond the Manila scope. This would
confuse admins and driver programmers.

single_svm_mode means the driver just has information about "where to
go and how", obtained from config opts and some special actions of the
driver, while multi_svm_mode needs to create that "where and how" itself.

My suggestion is
   single_svm_mode == static_mode
   multi_svm_mode  == dynamic_mode.

The "where to go and how" is static under single_svm_mode, but
dynamically created/deleted by Manila under multi_svm_mode.


Also, about the share-server concept.

A share-server is a concept from the tenant's point of view; the tenant does
not know whether it is a VM or dedicated hardware outside OpenStack, because
it is not visible to the tenant.
Each share has its own share-server, no matter how it is obtained (from
configuration under single_svm_mode, from Manila under multi_svm_mode).

I got the wrong idea that GlusterFS has no share server based on
https://github.com/openstack/manila/blob/master/manila/share/manager.py#L238;
without reading the driver code, doesn't this say: I create a share without a
share-server? But the truth is just that the share-server is not handled by
Manila, which doesn't mean it does not exist. E.g. in GlusterFS, the
share-server is self.gluster_address.

So, I suggest editing the ShareManager code to get the share_server before
create_share, based on the driver mode.
Such as:
http://paste.openstack.org/show/155930/
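
(Roughly the shape of the change I have in mind - an illustrative sketch, not
the actual paste contents; the helper names are made up:)

    # Sketch: ShareManager resolves the share server according to driver
    # mode, so the driver always gets an explicit share_server argument.
    def _get_share_server_for_share(self, context, share_ref):
        if self.driver.mode == 'static_mode':        # today: single_svm_mode
            # the "where to go and how" comes from the driver's config opts
            return self.driver.get_configured_share_server()
        # today: multi_svm_mode -- create/find one dynamically
        return self._provision_share_server(context, share_ref)

    def create_share(self, context, share_id, **kwargs):
        share_ref = self.db.share_get(context, share_id)
        share_server = self._get_share_server_for_share(context, share_ref)
        self.driver.create_share(context, share_ref,
                                 share_server=share_server)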

This would affect all drivers, but I think it is worth it from a long-term
perspective.

Hope to hear from you guys.

Thanks.
-chen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Documentation Review Inbox

2015-01-07 Thread Dmitry Borodaenko
I've put together a new review dashboard for Fuel documentation:
https://github.com/angdraug/gerrit-dash-creator/blob/fuel-docs-dashboard/dashboards/fuel-docs.dash

You can find the link generated from this source file under:
https://wiki.openstack.org/wiki/Fuel#Development_related_links
(it's too long to paste here)

I think we should also create a separate dashboard for fuel-specs, and
exclude both repos from the primary Fuel dashboard.

Thoughts?

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ThirdPartyCI][PCI] Intel Third party Hardware based CI for PCI

2015-01-07 Thread yongli he

Hi,

Intel has set up a hardware-based Third Party CI. It has already been running
sets of PCI test cases for several weeks (not sending out comments, just
logging the results), and the log server and these test cases seem fairly
stable now. To begin giving comments to the nova repository, what other
necessary work needs to be addressed?

Details:
1. ThirdPartySystems https://wiki.openstack.org/wiki/ThirdPartySystems 
Information

https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-PCI-CI

2. Sample logs:

http://192.55.68.190/143614/6/

http://192.55.68.190/139900/4

http://192.55.68.190/143372/3/

http://192.55.68.190/141995/6/

http://192.55.68.190/137715/13/

http://192.55.68.190/133269/14/

3. Test cases on github:
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases



Thanks
Yongli He

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][AdvancedServices] Confusion about the solution of the service chaining!

2015-01-07 Thread Sumit Naiksatam
Hi Alan,

Responses inline...

On Wed, Jan 7, 2015 at 4:25 AM,  lv.erc...@zte.com.cn wrote:
 Hi,

 I want to confirm how the project about Neutron Services Insertion,
 Chaining, and Steering is going. I found that all the code implementations
 for service insertion, service chaining and traffic steering listed in the
 JunoPlan were abandoned.

 https://wiki.openstack.org/wiki/Neutron/AdvancedServices/JunoPlan

 and I also found that we have a new project about GBP and
 group-based-policy-service-chaining be located at:

 https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-abstraction

 https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

 so I'm confused about the solution for service chaining.


Yes, the above two blueprints have been implemented and are available
for consumption today as a part of the Group-based Policy codebase and
release. The GBP model uses a policy trigger to drive the service
composition and can accommodate different rendering policies like
realization using NFV SFC.

 We are developing the service chaining feature, so we need to know which one
 is Neutron's choice.

It would be great if you can provide feedback on the current
implementation, and perhaps participate and contribute as well.

 Are the blueprints about service insertion,
 service chaining and traffic steering listed in the JunoPlan all abandoned?


Some aspects of this are perhaps a good fit in Neutron and others are
not. We are looking forward to continuing the discussion on this topic
on the areas which are potentially a good fit for Neutron (we have had
this discussion before as well).

 BR
 Alan



 



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][AdvancedServices] Confusion about the solution of the service chaining!

2015-01-07 Thread Kyle Mestery
On Wed, Jan 7, 2015 at 6:25 AM, lv.erc...@zte.com.cn wrote:

 Hi,

 I want to confirm how the project about Neutron Services
 Insertion, Chaining, and Steering is going. I found that all the code
 implementations for service insertion, service chaining and traffic
 steering listed in the JunoPlan were abandoned.

 https://wiki.openstack.org/wiki/Neutron/AdvancedServices/JunoPlan

 and I also found that we have a new project about GBP and
 group-based-policy-service-chaining be located at:


 https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-abstraction


 https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

 so I'm confused about the solution for service chaining.

 We are developing the service chaining feature, so we need to know which
 one is Neutron's choice. Are the blueprints about service
 insertion, service chaining and traffic steering listed in the JunoPlan all
 abandoned?

 Service chaining isn't in the plan for Kilo [1], but I expect it to be
something we talk about in Vancouver for the Lxxx release. The NFV/Telco
group has been talking about this as well. I'm hopeful we can combine
efforts and come up with a coherent service chaining solution that solves a
handful of useful use cases during Lxxx.

Thanks,
Kyle

[1]
http://specs.openstack.org/openstack/neutron-specs/priorities/kilo-priorities.html

 BR
 Alan



 




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-07 Thread Kyle Mestery
On Wed, Jan 7, 2015 at 8:21 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 Hi all,

 I've found out that dnsmasq < 2.67 does not work properly for IPv6 clients
 when it comes to MAC address matching (it fails to match, and so clients
 get 'no addresses available' response). I've requested version bump to 2.67
 in: https://review.openstack.org/145482

 Good catch, thanks for finding this Ihar!


 Now, since we've already released Juno with IPv6 DHCP stateful support,
 and DHCP agent still has minimal version set to 2.63 there, we have a
 dilemma on how to manage it from stable perspective.

 Obviously, we should communicate the revealed version dependency to
 deployers via next release notes.

 Should we also backport the minimal version bump to Juno? This will result
 in DHCP agent failing to start in case packagers don't bump dnsmasq version
 with the next Juno release. If we don't bump the version, we may leave
 deployers uninformed about the fact that their IPv6 stateful instances
 won't get any IPv6 address assigned.

 An alternative is to add a special check just for Juno that would WARN
 administrators instead of failing to start DHCP agent.

 Comments?

 Personally, I think the WARN may be the best route to go. Backporting a
change which bumps the required dnsmasq version seems like it may be harder
for operators to handle.
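
For illustration, a Juno-only warning check could look roughly like this (a
sketch, not the actual agent code; the constant and helper names are
assumptions):

    import logging
    import re

    from neutron.agent.linux import utils  # assumed command-runner helper

    LOG = logging.getLogger(__name__)
    MIN_DNSMASQ_FOR_DHCPV6 = 2.67

    def warn_on_old_dnsmasq():
        out = utils.execute(['dnsmasq', '--version'])
        match = re.search(r'version (\d+\.\d+)', out)
        if match and float(match.group(1)) < MIN_DNSMASQ_FOR_DHCPV6:
            LOG.warning('dnsmasq %s found: stateful DHCPv6 needs dnsmasq '
                        '>= 2.67, IPv6 clients may get no addresses.',
                        match.group(1))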

Kyle


 /Ihar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Hierarchical Multitenancy quotas

2015-01-07 Thread Tim Bell
Are we yet at the point in the New Year to register requests for exceptions?

There is strong interest from CERN and Yahoo! in this feature, and there are 
many +1s and no unaddressed -1s.

Thanks for consideration,

Tim

 Joe wrote
 ….

Nova's spec deadline has passed, but I think this is a good candidate for an 
exception.  We will announce the process for asking for a formal spec 
exception shortly after new years.


From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: 23 December 2014 19:02
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Hierarchical Multitenancy

Joe,

Thanks… there seems to be good agreement on the spec and the matching 
implementation is well advanced with BARC so the risk is not too high.

Launching HMT with quota in Nova in the same release cycle would also provide a 
more complete end user experience.

For CERN, this functionality is very interesting as it allows the central cloud 
providers to delegate the allocation of quotas to the LHC experiments. Thus, 
from a central perspective, we are able to allocate N thousand cores to an 
experiment and delegate their resource co-ordinator to prioritise the work 
within the experiment. Currently, we have many manual helpdesk tickets with 
significant latency to adjust the quotas.

Tim

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 23 December 2014 17:35
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Hierarchical Multitenancy


On Dec 23, 2014 12:26 AM, Tim Bell 
tim.b...@cern.ch wrote:



 It would be great if we can get approval for the Hierachical Quota handling 
 in Nova too (https://review.openstack.org/#/c/129420/).

Nova's spec deadline has passed, but I think this is a good candidate for an 
exception.  We will announce the process for asking for a formal spec exception 
shortly after new years.




 Tim



 From: Morgan Fainberg 
 [mailto:morgan.fainb...@gmail.com]
 Sent: 23 December 2014 01:22
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Hierarchical Multitenancy



 Hi Raildo,



 Thanks for putting this post together. I really appreciate all the work you 
 guys have done (and continue to do) to get the Hierarchical Mulittenancy code 
 into Keystone. It’s great to have the base implementation merged into 
 Keystone for the K1 milestone. I look forward to seeing the rest of the 
 development land during the rest of this cycle and what the other OpenStack 
 projects build around the HMT functionality.



 Cheers,

 Morgan







 On Dec 22, 2014, at 1:49 PM, Raildo Mascena 
  rail...@gmail.com wrote:



 Hello folks, My team and I developed the Hierarchical Multitenancy concept 
 for Keystone in Kilo-1 but What is Hierarchical Multitenancy? What have we 
 implemented? What are the next steps for kilo?

 To answers these questions, I created a blog post 
 http://raildo.me/hierarchical-multitenancy-in-openstack/



 Any question, I'm available.



 --

 Raildo Mascena

 Software Engineer.

 Bachelor of Computer Science.

 Distributed Systems Laboratory
 Federal University of Campina Grande
 Campina Grande, PB - Brazil



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] tooz 0.10 released

2015-01-07 Thread Julien Danjou
Hi fellow OpenStack developers,

The Oslo team is pleased to announce the release of tooz 0.10.

This release includes several bug fixes as well as many other changes:

  a2216e3 Add support for an optional redis-sentinel
  74a8550 README.rst tweaks
  4c29965 A few more documentation tweaks
  9cfe5db Sync requirements to global requirements
  1ac3e83 Add create/join/leave group support in IPC driver
  836fec0 Add driver autogenerated docs
  7b93dc7 Update links + python version supported
  41dab35 zookeeper: add support for delete group
  1cb825d redis: add support for group deletion
  88f0533 tests: minor code simplification
  c07951e memcached: add support for group deletion
  14000cd memcached: add support for _destroy_group
  1b45419 Switch to using oslosphinx
  42914c7 Add doc on how transaction is itself retrying internally
  e9f51e8 Fix .gitreview after rename/transfer
  39c09ed tests: use scenarios attributes for timeout capability
  007e02c tests: check for leave group events on dead members cleanup
  5535184 memcached: delete stale/dead group members on get_members()
  e153202 tests: remove check_port
  910188e tests: do not skip test on connection error
  5d07a4c Ensure 'leave_group' result gotten before further work

For more details, please see the git log history and:
  https://launchpad.net/python-tooz/+milestone/0.10

Please report issues through launchpad:
  https://launchpad.net/python-tooz
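
Several of the changes above add group create/join/leave support across the
drivers. For anyone new to the library, a minimal usage sketch (the backend
URL and member/group ids below are only illustrative; it assumes a memcached
server is reachable on the default port):

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'memcached://127.0.0.1:11211', b'member-1')
    coordinator.start()

    group = b'example-group'
    coordinator.create_group(group).get()   # raises GroupAlreadyExist if it exists
    coordinator.join_group(group).get()
    print(coordinator.get_members(group).get())
    coordinator.leave_group(group).get()
    coordinator.stop()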

Cheers,
-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping sevices

2015-01-07 Thread James Downs

On Jan 6, 2015, at 8:05 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 It would be desirable to be able to hide an agent from scheduling
 but no one has stepped up to make this happen.  Come to think of it,
 I'm not sure that a bug or blueprint has been filed yet to address it
 though it is something that I've wanted for a little while now.

Something like nova’s “service-disable”?
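
For reference, the Nova commands in question look roughly like this (the host
name is a placeholder):

    nova service-disable compute-host-01 nova-compute   # stop scheduling to this host
    nova service-list                                    # the service stays visible, marked disabled
    nova service-enable compute-host-01 nova-compute

which is essentially the behaviour being asked for here for neutron agents.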

Cheers,
-j
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Is anyone working on the following patch?

2015-01-07 Thread Lance Bragstad
https://review.openstack.org/#/c/113586/ is owned by dstanek but I
understand he is out this week at a conference?

It might be worth dropping in #openstack-keystone and seeing if dstanek
would be alright with you picking it up, since you're building on it.

On Wed, Jan 7, 2015 at 12:21 AM, Ajaya Agrawal ajku@gmail.com wrote:

 https://review.openstack.org/#/c/113586/

 Two of my patches depend on this patch.
 https://review.openstack.org/#/c/113277/
 https://review.openstack.org/#/c/110575/


 Cheers,
 Ajaya

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] unit test migration failure specific to MySQL/MariaDB - 'uuid': used in a foreign key constraint 'block_device_mapping_instance_uuid_fkey'

2015-01-07 Thread Matt Riedemann



On 1/6/2015 5:40 PM, Mike Bayer wrote:

Hello -

Victor Sergeyev and I are both observing the following test failure which 
occurs with all the tests underneath 
nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.  This is against 
master with a brand new tox environment and everything at the default.

It does not seem to be occurring on gates that run these tests and 
interestingly the tests seem to complete very quickly (under seven seconds) on 
the gate as well; the failures here take between 50-100 seconds to occur, not 
fully deterministically, and only on the MySQL backend; the Postgresql and 
SQLite versions of these tests pass.  I’m running against MariaDB server 
10.0.14 with Python 2.7.8 on Fedora 21.

Below is the test just for test_walk_versions, but the warnings (not 
necessarily the failures themselves) here also occur for test_migration_267 as 
well as test_innodb_tables.

I’m still looking into what the cause of this is; I’d imagine it’s something 
related to newer MySQL versions or perhaps MariaDB vs. MySQL. I’m just putting 
it up here in case someone already knows what this is or has some clue to save 
me some time figuring it out.  I apologize if I’m just doing something dumb; 
I’ve only recently begun to run Nova’s test suite in full against all backends, 
so I haven’t yet put intelligent thought into this nor have I yet tried to look 
at the migration in question causing the problem.  Will do that next.
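
In case it helps whoever looks at this next: the subject line suggests
MySQL/MariaDB refusing to ALTER a column that a foreign key in another table
references (the traceback below ends in migrate's alter_column, reached from
table.columns.uuid.alter(nullable=False)). Assuming that is indeed the
failure, the usual workaround is to drop the FK, alter the column, then
re-create the FK; a rough sqlalchemy-migrate sketch, not the actual Nova
migration:

    from migrate import ForeignKeyConstraint
    from sqlalchemy import MetaData, Table

    def make_uuid_non_nullable(engine):
        meta = MetaData(bind=engine)
        instances = Table('instances', meta, autoload=True)
        bdm = Table('block_device_mapping', meta, autoload=True)

        fkey = ForeignKeyConstraint(
            columns=[bdm.c.instance_uuid],
            refcolumns=[instances.c.uuid],
            name='block_device_mapping_instance_uuid_fkey')
        fkey.drop()                              # detach the referencing FK first
        instances.c.uuid.alter(nullable=False)   # the ALTER that newer MySQL otherwise rejects
        fkey.create()                            # put the FK back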


[mbayer@thinkpad nova]$ tox -e py27 -- 
nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.test_walk_versions
py27 develop-inst-noop: /home/mbayer/dev/openstack/nova
py27 runtests: PYTHONHASHSEED='0'
py27 runtests: commands[0] | find . -type f -name *.pyc -delete
py27 runtests: commands[1] | bash tools/pretty_tox.sh 
nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.test_walk_versions
running testr
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests}  
--load-list /tmp/tmpw7zqhE

2015-01-06 18:28:12.913 32435 WARNING oslo.db.sqlalchemy.session 
[req-5cc6731f-00ef-43df-8aec-4914a44d12c5 ] MySQL SQL mode is '', consider 
enabling TRADITIONAL or STRICT_ALL_TABLES
{0} 
nova.tests.unit.db.test_migrations.TestNovaMigrationsMySQL.test_walk_versions 
[51.553131s] ... FAILED

Captured traceback:
~~~
 Traceback (most recent call last):
   File nova/tests/unit/db/test_migrations.py, line 151, in 
test_walk_versions
 self.walk_versions(self.snake_walk, self.downgrade)
   File 
/home/mbayer/dev/openstack/nova/.tox/py27/lib/python2.7/site-packages/oslo/db/sqlalchemy/test_migrations.py,
 line 193, in walk_versions
 self.migrate_up(version, with_data=True)
   File nova/tests/unit/db/test_migrations.py, line 148, in migrate_up
 super(NovaMigrationsCheckers, self).migrate_up(version, with_data)
   File 
/home/mbayer/dev/openstack/nova/.tox/py27/lib/python2.7/site-packages/oslo/db/sqlalchemy/test_migrations.py,
 line 263, in migrate_up
 self.REPOSITORY, version)
   File 
/home/mbayer/dev/openstack/nova/.tox/py27/lib/python2.7/site-packages/migrate/versioning/api.py,
 line 186, in upgrade
 return _migrate(url, repository, version, upgrade=True, err=err, 
**opts)
   File string, line 2, in _migrate
   File 
/home/mbayer/dev/openstack/nova/.tox/py27/lib/python2.7/site-packages/migrate/versioning/util/__init__.py,
 line 160, in with_engine
 return f(*a, **kw)
   File 
/home/mbayer/dev/openstack/nova/.tox/py27/lib/python2.7/site-packages/migrate/versioning/api.py,
 line 366, in _migrate
 schema.runchange(ver, change, changeset.step)
   File 
/home/mbayer/dev/openstack/nova/.tox/py27/lib/python2.7/site-packages/migrate/versioning/schema.py,
 line 93, in runchange
 change.run(self.engine, step)
   File 
/home/mbayer/dev/openstack/nova/.tox/py27/lib/python2.7/site-packages/migrate/versioning/script/py.py,
 line 148, in run
 script_func(engine)
   File 
/home/mbayer/dev/openstack/nova/nova/db/sqlalchemy/migrate_repo/versions/267_instance_uuid_non_nullable.py,
 line 103, in upgrade
 process_null_records(meta, scan=False)
   File 
/home/mbayer/dev/openstack/nova/nova/db/sqlalchemy/migrate_repo/versions/267_instance_uuid_non_nullable.py,
 line 89, in process_null_records
 table.columns.uuid.alter(nullable=False)
   File 
/home/mbayer/dev/openstack/nova/.tox/py27/lib/python2.7/site-packages/migrate/changeset/schema.py,
 line 534, in alter
 return alter_column(self, *p, **k)
   File 

[openstack-dev] [QA] Meeting Thursday January 8th at 22:00 UTC

2015-01-07 Thread David Kranz

Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, January 8th at 22:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

It's also worth noting that a few weeks ago we started having a regular
dedicated Devstack topic during the meetings. So if anyone is interested in
Devstack development please join the meetings to be a part of the discussion.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

17:00 EST
07:00 JST
08:30 ACDT
23:00 CET
16:00 CST
14:00 PST

-David Kranz

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Questions about the Octavia project

2015-01-07 Thread Stephen Balukoff
Hello! I'm gonna answer inline as well, based on Phillip's responses.

On Tue, Jan 6, 2015 at 12:33 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:

 Ill answer inline what I can, others can chime in to clear up anything and
 answer the rest.

 On 1/6/15 10:38 AM, Andrew Hutchings and...@linuxjedi.co.uk wrote:

 Hi,
 
 I'm looking into the Octavia project in relation to something my team are
 working on inside HP and I have a bunch of questions.  I realise it is
 early days for the project and some of these could be too low level at
 this time.
 
 Some of these questions come from the fact that I could not get the
 documentation to compile and the docs site for Octavia is down.  The
 v0.5-component-design.dot file crashes Graphviz 2.38 in every OS I tried
 and unfortunately all my dev machines have that version or 2.36 which is
 too low to render it correctly.  It also requires at least 5 extra
 dependencies (Sphinx modules) to build the docs but doesn't try to
 install them.


I'm not sure what to tell you on this: It rendered fine when I wrote it
using 2.37. I'm now using 2.39 and it still renders fine for me and others
on the project. Perhaps (in private e-mail or chat) you could send me your
error output and we can help you troubleshoot it?


 
 I guess I'll start from the most obvious question:
 
 1. Octavia looks a lot like Libra but with integration into Neutron and
 Barbican (both were planned for Libra) as well as few other changes.  So
 the most obvious question is: why not just develop Libra for integration
 with Neutron?
 There were many discussions with many contributors, including HP,
 Rackspace, Bluebox, A10, etc., regarding this decision. In the docs we
 should have links to the reasoning behind some of these.


Phillip is right about this. It's also been many months since we discussed
this, but I seem to recall that nobody working on the project was in favor
of starting with the Libra code. Rather, we're building this from the
ground up, perhaps utilizing some of the lessons learned in Libra, but also
with the intent to not repeat mistakes made with Libra. If you'd like to
think of Octavia as next generation Libra that's fine, but they really
are separate projects. Please also note that this is a fruitless
discussion: Nobody working on Octavia is interested in going back on the
months of discussion, design, and work we've done on Octavia and starting
from the Libra source code at this time.


 
 Amphorae stuff:
 
 2. I see a lot of building blocks for the controller and Amphorae but not
 a lot about communication.  What protocol / method is to be used to
 communicate to the Amphorae instances?
 In the docs/specs the communication protocols are defined.


..and they usually end up being RESTful when it's intended for components
to run on separate machines (virtual or otherwise). As far as what each of
these internal APIs look like: Some have been defined in separate specs,
some are still under review, some have yet to be defined.


 3. How are Amphorae instances to be spun up on-demand?  I see a reference
 to Heat but not sure if that is why it is there

 The specs define how this is to happen


The reference to heat has to do with auto-scaling, which is something that
will come into play a lot later in the development process, probably after
Octavia v2.0. Even when this is in place, though, Octavia contains a
controller which interfaces with Nova and Neutron to accomplish spinning up
amphorae as needed. And yes, these are defined in separate specs.



 4. There is mention of Docker in some of the deploy scripts.  Is this for
 multi-tenancy or just separation of the Amphorae processes?


Docker (and the use of containers) is envisioned as one possible way to
generate amphora (as opposed to a pure VM). There is no plan at this time
to allow for multi-tenancy on a single amphora. Initially, each amphora
will also service only one loadbalancer.


 5. I take it Amphorae is designed to be single-AZ for now?


Correct. Multi-AZ is something that will be discussed probably after the
v1.0 release.


 
 Load Balancing:
 
 6. It seems like you are going to have SSL termination support and are
 going to use HAProxy, which means that you will have unencrypted data
 between the LB and web servers.  How do you plan to work around this
 problem?
 Not sure what the 'problem' is; ultimately it's up to the user, but a
 private network can be configured between the LB and web server


Yes, again for some this is not a problem. In the context of Neutron LBaaS
(which Octavia will be using as its user API), there has been some
discussion of encrypting traffic between the thingy providing load
balancing and the back-end pool member machines. So far, though, nobody
working on the project has had this as a show-stopper requirement-- and
everyone working on it would simply like to see front-end TLS termination
delivered first (ie. walk before running).


 
 Security:
 
 7. Someone in the 

Re: [openstack-dev] [Octavia] Questions about the Octavia project

2015-01-07 Thread Stephen Balukoff
Hi Andrew,

More responses inline:

On Wed, Jan 7, 2015 at 12:11 AM, Andrew Hutchings and...@linuxjedi.co.uk
wrote:

 Hi Phillip,

 Thanks for your response.

  On 6 Jan 2015, at 20:33, Phillip Toohill phillip.tooh...@rackspace.com
 wrote:
 
  Ill answer inline what I can, others can chime in to clear up anything
 and
  answer the rest.

 The reason I asked the questions I did is because I can’t find any OS the
 docs will actually compile on and it is difficult to find these answers
 trawling through .dot and .rst files.  I’ve since found answers for a couple
 of them.

 I have several recommendations based on what I have read so far, such as
 using Protobufs instead of JSON for the Amphorae-Controller
 configuration communication (I can go into lots of detail on why another
 time).  I very much like the HMAC-signed UDP messages idea though.


I think we defaulted to JSON because it's a well-understood way of
serializing data for use in a RESTful interface. I'm not familiar with
protobufs, and am willing to hear you out on reasons we should use it
instead of JSON-- but do note that what you're seeing is the result of some
hard-won compromises after extensive discussion, and we're *finally* (after
several months of this) getting to the point where we can
divide-and-conquer on this problem because we're achieving clarity and
consensus on what the components should be and how they should interface.
We're going to be resistant to changing certain details precisely because
we don't want to re-open cans of worms that we're just now getting sealed
shut-- so unless you've got some *really* compelling reasons here, we're
unlikely to want to change things at this juncture.



 I now have some feedback for my team, thanks again.

 Kind Regards
 --
 Andrew Hutchings - LinuxJedi - http://www.linuxjedi.co.uk/




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Questions about the Octavia project

2015-01-07 Thread Andrew Hutchings
Hi Stephen,

Thanks for taking the time to write both responses.


 On 7 Jan 2015, at 19:27, Stephen Balukoff sbaluk...@bluebox.net wrote:
 
 I think we defaulted to JSON because it's a well-understood way of 
 serializing data for use in a RESTful interface. I'm not familiar with 
 protobufs, and am willing to hear you out on reasons we should use it instead 
 of JSON-- but do note that what you're seeing is the result of some hard-won 
 compromises after extensive discussion, and we're *finally* (after several 
 months of this) getting to the point where we can divide-and-conquer on this 
 problem because we're achieving clarity and consensus on what the components 
 should be and how they should interface. We're going to be resistant to 
 changing certain details precisely because we don't want to re-open cans of 
 worms that we're just now getting sealed shut-- so unless you've got some 
 *really* compelling reasons here, we're unlikely to want to change things at 
 this juncture.

I’ve pretty much summarised the reasons here at the following URL but I 
understand the reasons for sticking with JSON: 
http://linuxjedi.co.uk/posts/2014/Oct/31/why-json-is-bad-for-applications/

Kind Regards
--
Andrew Hutchings - LinuxJedi - http://www.linuxjedi.co.uk/





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Should region name be case insensitive?

2015-01-07 Thread James Downs

On Jan 7, 2015, at 12:59 AM, Denis Makogon dmako...@mirantis.com wrote:

 
 Hello Zhou.
 
 On Wed, Jan 7, 2015 at 10:39 AM, Zhou, Zhenzan zhenzan.z...@intel.com wrote:
 Hi,
 
  
 
 I hit such an issue when using the glance/nova clients deployed with Devstack to 
 talk to a cloud deployed with TripleO:
 
  
 
 [minicloud@minicloud allinone]$ glance image-list
 
 public endpoint for image service in RegionOne region not found
 
  
 
 Both glance/nova python client libraries allow users to specify a region name 
 (see http://docs.openstack.org/user-guide/content/sdk_auth_nova.html). So, 
 you are free to mention any region you want.

That’s true, but the OP was asking whether the region name should be case 
sensitive or not. 

I think it probably makes sense that regionOne should be the same as RegionONE, 
or RegionOne.
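
In the meantime, the practical workaround is to hand the clients the region
string exactly as the cloud registered it (case included), e.g.:

    export OS_REGION_NAME=regionOne    # match the service catalog exactly
    glance image-list

(The value to use is whatever the cloud's own rc file or endpoint listing
shows.)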

Cheers,
-j___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][neutron] minimal dnsmasq version

2015-01-07 Thread Ihar Hrachyshka

Hi all,

I've found out that dnsmasq < 2.67 does not work properly for IPv6 
clients when it comes to MAC address matching (it fails to match, and so 
clients get 'no addresses available' response). I've requested version 
bump to 2.67 in: https://review.openstack.org/145482


Now, since we've already released Juno with IPv6 DHCP stateful support, 
and DHCP agent still has minimal version set to 2.63 there, we have a 
dilemma on how to manage it from a stable branch perspective.


Obviously, we should communicate the revealed version dependency to 
deployers via next release notes.


Should we also backport the minimal version bump to Juno? This will 
result in DHCP agent failing to start in case packagers don't bump 
dnsmasq version with the next Juno release. If we don't bump the 
version, we may leave deployers uninformed about the fact that their 
IPv6 stateful instances won't get any IPv6 address assigned.


An alternative is to add a special check just for Juno that would WARN 
administrators instead of failing to start DHCP agent.
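
A rough sketch of what such a check could look like (purely illustrative, not
the actual agent code):

    import logging
    import re
    import subprocess

    LOG = logging.getLogger(__name__)
    MINIMUM_DNSMASQ_VERSION = (2, 67)

    def warn_on_old_dnsmasq():
        # `dnsmasq --version` prints e.g. "Dnsmasq version 2.66  Copyright ..."
        output = subprocess.check_output(['dnsmasq', '--version'])
        match = re.search(r'version (\d+)\.(\d+)', output.decode('utf-8'))
        if not match:
            return
        found = tuple(int(part) for part in match.groups())
        if found < MINIMUM_DNSMASQ_VERSION:
            LOG.warning('dnsmasq %d.%d found; DHCPv6 stateful address '
                        'assignment needs dnsmasq >= 2.67', *found)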


Comments?

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval

2015-01-07 Thread Robert Li (baoli)
Hi Joe and others,

One of the topics for tomorrow’s NOVA IRC is about k-2 spec exception process. 
I’d like to bring up the following specs up again for consideration:

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we’d like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we’ve submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads


Thanks for your kindly consideration.

—Robert

On 12/22/14, 1:20 PM, Joe Gordon 
joe.gord...@gmail.com wrote:



On Fri, Dec 19, 2014 at 6:53 AM, Robert Li (baoli) 
ba...@cisco.com wrote:
Hi Joe,

See this thread on the SR-IOV CI from Irena and Sandhya:

http://lists.openstack.org/pipermail/openstack-dev/2014-November/050658.html

http://lists.openstack.org/pipermail/openstack-dev/2014-November/050755.html

I believe that Intel is building a CI system to test SR-IOV as well.

Thanks for the clarification.


Thanks,
Robert


On 12/18/14, 9:13 PM, Joe Gordon 
joe.gord...@gmail.com wrote:



On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli) 
ba...@cisco.com wrote:
Hi,

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we’d like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we’ve submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Neutron-Network settings support for vnic-type
https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

Now that the specs deadline is approaching, I’d like to bring them up in here 
for exception considerations. A lot of works have been put into them. And we’d 
like to see them get through for Kilo.

We haven't started the spec exception process yet.


Regarding CI for PCI passthrough and SR-IOV, see the attached thread.

Can you share this via a link to something on 
http://lists.openstack.org/pipermail/openstack-dev/


thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Should region name be case insensitive?

2015-01-07 Thread Ben Nemec
On 01/07/2015 06:18 AM, Sean Dague wrote:
 On 01/07/2015 06:36 AM, James Downs wrote:

 On Jan 7, 2015, at 12:59 AM, Denis Makogon dmako...@mirantis.com wrote:


 Hello Zhou.

 On Wed, Jan 7, 2015 at 10:39 AM, Zhou, Zhenzan zhenzan.z...@intel.com wrote:

 Hi, 


 I meet such an issue when using glance/nova client deployed with
 Devstack to talk with a cloud deployed with TripleO:


 [minicloud@minicloud allinone]$ glance image-list

 public endpoint for image service in RegionOne region not found


 Both glance/nova python client libraries allow users to specify a
 region name
 (see http://docs.openstack.org/user-guide/content/sdk_auth_nova.html).
 So, you are free to mention any region you want.

 That’s true, but the OP was asking whether the region name should be
 case sensitive or not. 

 I think it probably makes sense that regionOne should be the same as
 RegionONE, or RegionOne.
 
 The general standard in OpenStack has been case sensitivity. There are
 performance and security implications on case insensitive environments.
 
 It just sounds like tripleo is using a bad default here, and that's what
 should be addressed.

I agree that it's somewhat unfortunate tripleo is using a different
default region name than devstack, but at the same time I wouldn't
expect to be able to use a devstack rc file to talk to a tripleo cloud.
 Tripleo has its own rc files to be used for that.

 
   -Sean
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][AdvancedServices] Confusion about the solution of the service chaining!

2015-01-07 Thread lv . erchun
Hi,
I want to confirm that how is the project about Neutron Services 
Insertion, Chaining, and Steering going, I found that all the code 
implementation about service insertion、service chaining and traffic 
steering list in JunoPlan were Abandoned .
https://wiki.openstack.org/wiki/Neutron/AdvancedServices/JunoPlan

I also found that we have a new project about GBP and 
group-based-policy-service-chaining located at:
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-abstraction
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

so I'm confused about the solution for service chaining.

We are developing the service chaining feature, so we need to know which 
one is Neutron's choice. Are the blueprints for service 
insertion, service chaining and traffic steering listed in the JunoPlan all 
Abandoned?

BR
Alan



ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] serial-console *replaces* console-log file?

2015-01-07 Thread Markus Zoeller
The blueprint serial-ports introduced a serial console connection
to an instance via websocket. I'm wondering:
* why enabling the serial console *replaces* writing to the console log file [1]?
* how one is supposed to retrieve the boot messages *before* one connects?

The replacement of the log file has an impact on the os-console-output
API [2]. The CLI command `nova console-log instance-name` shows:
ERROR (ClientException): The server has either erred or is incapable
of performing the requested operation. (HTTP 500)
Horizon shows in its Log tab of an instance
Unable to get log for instance uuid.

Would it be good to have both, the serial console *and* the console log
file?


[1] 
https://review.openstack.org/#/c/113960/14/nova/virt/libvirt/driver.py,cm
[2] 
http://developer.openstack.org/api-ref-compute-v2-ext.html#ext-os-console-output
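
For reference, the behaviour in [1] is tied to turning the feature on in
nova.conf on the compute node, roughly as below (option names as I recall them
from the serial-ports spec; worth double-checking against the config reference):

    [serial_console]
    # When enabled, the libvirt driver attaches the guest's serial device to a
    # TCP port for `nova get-serial-console` instead of writing console.log.
    enabled = true
    port_range = 10000:20000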


Regards,
Markus Zoeller
IRC: markus_z
Launchpad: mzoeller


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] shelved_offload_time configuration

2015-01-07 Thread Joe Gordon
On Mon, Dec 22, 2014 at 10:36 PM, Kekane, Abhishek 
abhishek.kek...@nttdata.com wrote:

  Hi All,



 AFAIK, for the shelve API the parameter shelved_offload_time needs to be
 configured on the compute node.

 Can we configure this parameter on the controller node as well?


Not 100% sure what you are asking but hopefully this will clarify things:
nova.conf files are read locally, so setting the value on a controller node
doesn't affect any compute nodes.
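
So the option needs to be set in nova.conf on each compute node, roughly as
follows (the value shown is only illustrative):

    [DEFAULT]
    # Seconds to keep a shelved instance's resources on the host before
    # offloading it; 0 offloads immediately, -1 disables automatic offload.
    shelved_offload_time = 0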




 Please suggest.



 Thank You,



 Abhishek Kekane

 __
 Disclaimer: This email and any attachments are sent in strictest confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data. If you are not the intended recipient,
 please advise the sender by replying promptly to this email and then delete
 and destroy this email and any attachments without any further use, copying
 or forwarding.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api]do we need to have a spec for all api related changes?

2015-01-07 Thread Joe Gordon
On Tue, Jan 6, 2015 at 7:43 PM, Eli Qiao ta...@linux.vnet.ibm.com wrote:

  hi all:
 I have a patch [1] that just makes slight changes to the API; do I need to write a
 spec (it feels like a waste of time to get one approved)?
 Since api-microversions [2] is almost done, can we just feel free to add
 changes as microversion API changes, i.e. bump the version and write down the
 changes in rest_api_version_history.rst?


We should always be careful about making API changes, so all API changes
need a spec.

http://docs.openstack.org/developer/nova/devref/kilo.blueprints.html#kilo


 [1] https://review.openstack.org/#/c/144914/
 [2]
 https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bp/api-microversions,n,z

 --
 Thanks,
 Eli (Li Yong) Qiao


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Etherpad for the Keystone midcycle meetup

2015-01-07 Thread Adam Young

On 01/06/2015 09:28 PM, Steve Martinelli wrote:

Howdy,

https://etherpad.openstack.org/p/kilo-keystone-midcycle 
https://etherpad.openstack.org/p/kilo-nova-midcycle is the etherpad
for the Keystone midcycle meetup. Similar to the Nova team, we should start
collecting topics to cover, and sort them by priority later once we've
collected a few.

Steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
That is the Nova one:  the text shows keystone, but the underlying 
hyperlink is:


https://etherpad.openstack.org/p/kilo-nova-midcycle


Click on https://etherpad.openstack.org/p/kilo-keystone-midcycle instead
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Switching CI back to amd64

2015-01-07 Thread Clint Byrum
Excerpts from Derek Higgins's message of 2015-01-07 02:51:41 -0800:
 Hi All,
 I intended to bring this up at this mornings meeting but the train I
 was on had no power sockets (and I had no battery) so sending to the
 list instead.
 
 We currently run our CI with on images built for i386, we took this
 decision a while back to save memory ( at the time is allowed us to move
 the amount of memory required in our VMs from 4G to 2G (exactly where in
 those bands the hard requirements are I don't know)
 
 Since then we have had to move back to 3G for the i386 VM as 2G was no
 longer enough so the saving in memory is no longer as dramatic.
 
 Now that the difference isn't as dramatic, I propose we switch back to
 amd64 (with 4G VMs) in order to CI on what would be closer to a
 production deployment; before making the switch I wanted to throw the
 idea out there for others to digest.
 
 This obviously would impact our capacity as we will have to reduce the
 number of testenvs per testenv host. Our capacity (in RH1 and roughly
 speaking) allows us to run about 1440 CI jobs per day. I believe we can
 make the switch and still keep capacity above 1200 with a few other changes:
 1. Add some more testenv hosts, we have 2 unused hosts at the moment and
 we can probably take 2 of the compute nodes from the overcloud.
 2. Kill VM's at the end of each CI test (as opposed to leaving them
 running until the next CI test kills them), allowing us to more
 successfully overcommit on RAM
 3. maybe look into adding swap on the test env hosts; they don't
 currently have any, so overcommitting RAM is a problem that the OOM
 killer is handling from time to time (I only noticed this yesterday).
 
 The other benefit to doing this is that if we were ever to want to CI
 images built with packages (this has come up in previous meetings) we
 wouldn't need to provide i386 packages just for CI, while the rest of
 the world uses amd64.

+1 on all counts.

It's also important to note that we should actually have a whole new
rack of servers added to capacity soon (I think soon is about 6 months
so far, but we are at least committed to it). So this would be, at worst,
a temporary loss of 240 jobs per day.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [sdk] Proposal to achieve consistency in client side sorting

2015-01-07 Thread Steven Kaufer

Sean Dague s...@dague.net wrote on 01/07/2015 06:21:52 AM:

 From: Sean Dague s...@dague.net
 To: openstack-dev@lists.openstack.org
 Date: 01/07/2015 06:22 AM
 Subject: Re: [openstack-dev] [api] [sdk] Proposal to achieve
 consistency in client side sorting

 On 01/06/2015 09:37 PM, Rochelle Grober wrote:
  Steven,
 
 
 
  This sounds like a perfect place for a cross project spec.  It wouldn’t
  have to be a big one, but all the projects would have a chance to
review
  and the TC would oversee to ensure it gets proper review.
 
 
 
  TC members, am I on point here?

 Yes, this sounds reasonable. It would be a general CLI guidelines spec
 which we could expand over time to include common patterns that we
 prefer CLIs use when interfacing with their users.

-Sean

Thanks for the feedback.
Spec up for review at: https://review.openstack.org/#/c/145544/
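
(As a purely hypothetical illustration of the consistency the spec is after,
the goal is that sorting flags look the same whichever project CLI you call,
e.g. something like:

    nova list --sort-key display_name --sort-dir asc
    cinder list --sort-key display_name --sort-dir asc

rather than each client inventing its own spelling; the spec itself is
authoritative on the actual flag names.)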

Thanks,
Steven Kaufer


 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev