Re: [openstack-dev] [Nova][Neutron] Neutron + Nova + OVS security group fix

2014-03-26 Thread Akihiro Motoki
Hi Nachi and the teams,

(2014/03/26 9:57), Salvatore Orlando wrote:
 I hope we can sort this out on the mailing list or IRC, without having to 
 schedule emergency meetings.

 Salvatore

 On 25 March 2014 22:58, Nachi Ueno na...@ntti3.com mailto:na...@ntti3.com 
 wrote:

 Hi Nova, Neutron Team

 I would like to discuss the issue of the Neutron + Nova + OVS security group fix.
 We had a discussion in IRC today, but the issue is complicated, so we will have
 a conf call tomorrow at 17:00 UTC (10 AM PDT) in #openstack-neutron.

 (I'll put conf call information in IRC)


 thanks, but I'd prefer you discuss the matter on IRC.
 I won't be available at that time and having IRC logs on eavesdrop will allow 
 me to catch up without having to ask people or waiting for minutes on the 
 mailing list.

I can't join the meeting either; it is midnight here.


 -- Please let me know if this time won't work for you.

 Bug Report
 https://bugs.launchpad.net/neutron/+bug/1297469

 Background of this issue:
 ML2 + OVSDriver + IptablesBasedFirewall is the default plugin combination
 in Neutron.
 In this case, we need special handling of the VIF. Because Open vSwitch
 does not support iptables, we are
 using a Linux bridge plus an Open vSwitch bridge. We call this the hybrid 
 driver.


 The hybrid solution in Neutron has been around for such a long time that I 
 would hardly call it special handling.
 To summarize, the VIF is plugged into a Linux bridge, which has another leg 
 plugged into the OVS integration bridge.

 In another discussion, we generalized the Nova-side VIF plugging into
 the libvirt GenericVIFDriver.
 The idea is to let Neutron tell the GenericDriver the VIF plugging
 configuration details, and the GenericDriver
 takes care of them.


 The downside of the generic driver is that so far it's assuming local 
 configuration values are sufficient to correctly determine VIF plugging.
 The generic VIF driver will use the hybrid driver if get_firewall_required is 
 true. And this will happen if the firewall driver is anything different from 
 the NoOp driver.
 This was uncovered by a devstack commit (1143f7e). When I previously 
 discussed with the people involved this issue, I was under the impression 
 that the devstack patch introduced the problem.
 Apparently the Generic VIF driver does not, at the moment, take hints from 
 neutron regarding the driver to use, and therefore, from what I gather, makes 
 a decision based on nova conf flags only.
 So a quick fix would be to tell the Generic VIF driver to always use hybrid 
 plugging when neutron is enabled (which can be gathered by nova conf flags).
 This will fix the issue for ML2, but will either break or insert an 
 unnecessary extra hop for other plugins.

When the generic VIF driver was introduced, the OVS VIF driver and the hybrid VIF 
driver were
considered the same, as both are plugged into OVS and the hybrid driver is 
implemented
as a variation of the OVS driver, but things are not as simple as first 
thought.
The hybrid driver solution has lived for such a long time that IMO the hybrid VIF driver 
should
be considered distinct from the OVS VIF driver. I am starting to think 
VIF_TYPE_OVS_HYBRID
is a good way to go, as Salvatore mentioned below.

Another point to be discussed is how passing VIF security attributes should work 
from now on.
Even when the Neutron security group is enabled, do we need some port security 
mechanism
(anti-spoofing, ...) on the nova-compute side (such as libvirt nwfilter) or not?



 Unfortunately, the HybridDriver was removed before the GenericDriver was ready
 for security groups.


 The drivers were marked for deprecation in Havana, and if we thought the 
 GenericDriver was not good for neutron security groups we had enough time to 
 scream.

 This makes the ML2 + OVSDriver + IptablesBasedFirewall combination 
 non-functional.
 We were working on the real fix, but we can't make it until the Icehouse
 release due to design discussions [1].

 # Even the neutron-side patch isn't merged yet.

 So we are proposing a workaround fix on the Nova side.
 In this fix, we are adding a special version of the GenericVIFDriver
 which can work with this combination.
 There are two points to this new driver:
 (1) It does not set conf.filtername. Because we should use the
 NoopFirewallDriver, conf.filtername needs to be None
 when we use it.
 (2) It uses plug_ovs_hybrid and unplug_ovs_hybrid by enforcing
 get_firewall_required as True.

IIUC, the original intention of get_firewall_required() is to control
whether nwfilter is enabled or not, not to control hybrid plugging.
As a plan, get_firewall_required() would be changed to look at a binding attribute
(binding:capability:port_filter or binding:vif_security:iptable_required
if I use the concepts discussed so far).
What we need is a way to determine whether hybrid plugging is required or not.
Changing the meaning of get_firewall_required is not a good idea to me.
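
As a rough illustration of that distinction (a sketch only, not the actual
nova code; VIF_TYPE_OVS_HYBRID is the hypothetical binding:vif_type value
mentioned above, and the function names are illustrative):

    # Sketch: decide hybrid plugging from the VIF type reported by Neutron,
    # instead of overloading get_firewall_required().
    VIF_TYPE_OVS = 'ovs'
    VIF_TYPE_OVS_HYBRID = 'ovs_hybrid'  # hypothetical new vif_type value

    def needs_hybrid_plugging(vif):
        # True when the port must go through the Linux bridge + veth pair.
        return vif.get('type') == VIF_TYPE_OVS_HYBRID

    def plug(vif):
        if needs_hybrid_plugging(vif):
            return 'plug_ovs_hybrid'  # Linux bridge + OVS integration bridge
        return 'plug_ovs'             # plain OVS port

    if __name__ == '__main__':
        print(plug({'type': VIF_TYPE_OVS_HYBRID}))  # -> plug_ovs_hybrid
        print(plug({'type': VIF_TYPE_OVS}))         # -> plug_ovs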


 

Re: [openstack-dev] [Neutron] Using Python-Neutronclient from Python - docstrings needed?

2014-03-26 Thread Rajdeep Dua
Thanks, will take a look



On Tuesday, March 25, 2014 11:33 PM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:
 
On Fri, Mar 21, 2014 at 08:35:05PM EDT, Rajdeep Dua wrote:
 Sean,
 If you can point me to the project file in GitHub which needs to be modified,
 I will include these docs
 
 Thanks
 Rajdeep

I imagine inside the openstack-manuals git repo

https://github.com/openstack/openstack-manuals

Possibly inside the doc/user-guide tree.

Although others may have better suggestions.


-- 
Sean M. Collins


Re: [openstack-dev] [TripleO][reviews] We're falling behind

2014-03-26 Thread Jan Provazník

On 03/25/2014 09:17 PM, Robert Collins wrote:

TripleO has just seen an influx of new contributors. \o/. Flip side -
we're now slipping on reviews /o\.

In the meeting today we had basically two answers: more cores, and
more work by cores.

We're slipping by 2 reviews a day, which given 16 cores is a small amount.

I'm going to propose some changes to core in the next couple of days -
I need to go and re-read a bunch of reviews first - but, right now we
don't have a hard lower bound on the number of reviews we request
cores commit to (on average).

 We're seeing 39/day from the 16 cores - which isn't enough as we're
 falling behind. That's about 2.5 per core. So - I'd like to ask all cores to
commit to doing 3 reviews a day, across all of tripleo (e.g. if your
favourite stuff is all reviewed, find two new things to review even if
outside comfort zone :)).

And we always need more cores - so if you're not a core, this proposal
implies that we'll be asking that you a) demonstrate you can sustain 3
reviews a day on average as part of stepping up, and b) be willing to
commit to that.

Obviously if we have enough cores we can lower the minimum commitment
- so I don't think this figure should be fixed in stone.

And now - time for a loose vote - who (who is a tripleo core today)
supports / disagrees with this proposal - lets get some consensus
here.

I'm in favour, obviously :), though it is hard to put reviews ahead of
 direct itch scratching, it's the only way to scale the project.

-Rob



+1



Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-26 Thread Mathieu Rohon
Hi,

thanks for this very interesting use case!
Maybe you can still use VXLAN or GRE for tenant networks, to bypass
the 4k limit of vlans. then you would have to send packets to the vlan
tagged interface, with the tag assigned by the VDP protocol, and this
traffic would be encapsulated inside the segment to be carried inside
the network fabric. Of course you will have to take care about MTU.
The only thing you have to consider is to make sure that the default
route between VXLAN endpoints goes through your VLAN-tagged interface.



Best,
Mathieu

On Tue, Mar 25, 2014 at 12:13 AM, Padmanabhan Krishnan kpr...@yahoo.com wrote:
 Hello,
 I have a topology where my Openstack compute nodes are connected to the
 external switches. The fabric comprising the switches supports more than
 4K segments. So, I should be able to create more than 4K networks in
 Openstack. But, the VLAN to be used for communication with the switches is
 assigned by the switches using 802.1QBG (VDP) protocol. This can be thought
 of as a network overlay. The VMs send .1q frames to the switches and the
 switches associate them with the segment (VNI in case of VXLAN).
 My question is:
 1. I cannot use a type driver of VLAN because of the 4K limitation. I cannot
 use a type driver of VXLAN or GRE because that may mean host based overlay.
 Is there an integrated type driver i can use like an external network for
 achieving the above?
 2. The Openstack module running in the compute should communicate with VDP
 module (lldpad) running there.
 In the computes, I see that ovs_neutron_agent.py is the one programming the
 flows. Here, for the new type driver, should I add a special case to
 provision_local_vlan() for communicating with lldpad to retrieve the
 provider VLAN? If there were a type driver component running in each
 compute, I would have added another one for my purpose. Since the ML2
 architecture has its mechanism/type driver modules in the controller only, I
 can only make changes here.

 Please let me know if there's already an implementation for my above
 requirements. If not, should I create a blueprint?

 Thanks,
 Paddu



Re: [openstack-dev] [Nova][Neutron] Neutron + Nova + OVS security group fix

2014-03-26 Thread Salvatore Orlando
The thread branched, and it's getting long.
I'm trying to summarize the discussion for other people to quickly catch up.

- The bug being targeted is https://bugs.launchpad.net/neutron/+bug/1297469
It has also been reported as
https://bugs.launchpad.net/neutron/+bug/1252620 and
as https://bugs.launchpad.net/nova/+bug/1248859
The fix for bug 1112912 also included a fix for it.
- The problem is the generic VIF driver does not perform hybrid plugging
which is required by Neutron when running with ML2 plugin and OVS mech
driver
- The proposed patches (#21946 and #44596) are however very unlikely to
merge in icehouse
- An alternative approach has been proposed (
https://review.openstack.org/#/c/82904/); this will 'specialize' the
GenericVIF driver for use with neutron.
It is meant to be a temporary workaround pending a permanent solution. It does
not add conf variables, but probably has DocImpact.
If that works for nova core, that works for me as well
- An idea regarding leveraging VIF_TYPE to fix the issue has also been
floated. This would constitute a fix which might be improved in the future,
and is still small and targeted. However, we still need to look at the issue
Nachi is pointing out regarding the fact that a libvirt network filter name
should not be added to the guest config.

Salvatore


[openstack-dev] [Openstack] [Nova] Havana virtio_blk | kvm, kernel panic on VM boot

2014-03-26 Thread saurabh agarwal
I have compiled a new Linux kernel with CONFIG_VIRTIO_BLK=y, but it doesn't
boot and the kernel panics. On the kernel command line I tried passing
root=/dev/vda and root=/dev/vda1, but the same kernel panic comes every time.
VIRTIO_NET was working fine when VIRTIO_BLK was not enabled, and the VM booted
up fine. But with virtio-blk I see the kernel panic below. Can someone
please suggest what could be going wrong?

VFS: Cannot open root device vda or unknown-block(253,0)
Please append a correct root= boot option; here are the available partitions:
fd00         8388608 vda  driver: virtio_blk
  fd01       7340032 vda1 ----
  fd02        512000 vda2 ----
  fd03        535552 vda3 ----
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(253,0)

Regards,
Saurabh


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-26 Thread Dmitry
Hi Thomas,
Can you share some documentation of what you're doing right now with the
TOSCA-compliant layer?
We would like to join this effort.

Thanks,
Dmitry


On Wed, Mar 26, 2014 at 10:38 AM, Thomas Spatzier 
thomas.spatz...@de.ibm.com wrote:

 Excerpt from Zane Bitter's message on 26/03/2014 02:26:42:

  From: Zane Bitter zbit...@redhat.com
  To: openstack-dev@lists.openstack.org
  Date: 26/03/2014 02:27
  Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
 

 snip

   Cloud administrators are usually technical guys that are capable of
   learning HOT and writing YAML templates. They know exact configuration
   of their cloud (what services are available, what is the version of
   OpenStack cloud is running) and generally understands how OpenStack
   works. They also know about software they intent to install. If such
 guy
   wants to install Drupal he knows exactly that he needs HOT template
   describing Fedora VM with Apache + PHP + MySQL + Drupal itself. It is
   not a problem for him to write such HOT template.
 
  I'm aware that TOSCA has these types of constraints, and in fact I
  suggested to the TOSCA TC that maybe this is where we should draw the
  line between Heat and some TOSCA-compatible service: HOT should be a
  concrete description of exactly what you're going to get, whereas some
  other service (in this case Murano) would act as the constraints solver.
  e.g. something like an image name would not be hardcoded in a Murano
  template, you have some constraints about which operating system and
  what versions should be allowed, and it would pick one and pass it to
  Heat. So I am interested in this approach.

 I can just support Zane's statements above. We are working on exactly those
 issues in the TOSCA YAML definition, so it would be ideal to just
 collaborate on this. As Zane said, there currently is a thinking that some
 TOSCA-compliant layer could be a (maybe thin) layer above Heat that
 resolves a more abstract (thus more portable) template into something
 concrete, executable. We have started developing code (early versions are
 on stackforge already) to find out the details.

 
  The worst outcome here would be to end up with something that was
  equivalent to TOSCA but not easily translatable to the TOSCA Simple
  Profile YAML format (currently a Working Draft). Where 'easily
  translatable' preferably means 'by just changing some names'. I can't
  comment on whether this is the case as things stand.
 

 The TOSCA Simple Profile in YAML is a working draft at the moment, so we
 are pretty much open for any input. So let's try to get the right folks
 together and get it right. Since the Murano folks have indicated before
 that they are evaluating the option to join the OASIS TC, I am optimistic
 that we can get the streams together. Having implementation work going on
 here in this community in parallel to the standards work, and both streams
 inspiring each other, will be fun :-)


 Regards,
 Thomas




Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-26 Thread Thierry Carrez
Russell Bryant wrote:
 [...]
 First, it seems there isn't a common use of deprecated.  To me,
 marking something deprecated means that the deprecated feature:
 
  - has been completely replaced by something else
 
  - end users / deployers should take action to migrate to the
new thing immediately.
 
  - The project has provided a documented migration path
 
  - the old thing will be removed at a specific date/release

Agreed, IMHO we need to reserve the use of the deprecated terminology for
the idea of moving end users, deployers, external client library
developers (people outside of OpenStack direct reach) off a given API
version. Deprecation is about giving them a fair heads-up about
something that is about to be removed, so that they are encouraged to
move off it. It needs to be discussed and announced with the user
community, and come with a precise plan.

Internal consumption of APIs between OpenStack projects is a different
beast: (1) it's under our control and (2) we really can't remove an API
until all our internal pieces have migrated off it.

So I wouldn't use deprecation warnings to encourage other OpenStack
projects to move off an API. Those warnings can't come with a precise date
since, if projects don't comply with the suggestion, we just can't remove
that API support. I would therefore go this way:

1. API vN is stable and supported
2. API vN+1 is being developed and experimental
3. API vN+1 is marked stable and supported
4. Engage with other consuming OpenStack projects to migrate to vN+1
5. Migration is completed
6. Deprecation plan (and removal date) is discussed with stakeholders
7. Deprecation plan (and removal date) is decided and announced
8. Deprecation messages are added to code for API vN users
9. At removal date, API vN is removed

Keystone is at step 4. It shouldn't use deprecation terminology before
step 6.

If step 4 is blocked, project should first raise the issue at
cross-project meetings, and if all else fails at the TC level.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] requirements repository core reviewer updates

2014-03-26 Thread Thierry Carrez
Doug Hellmann wrote:
 In the last project meeting, we discussed updating the list of core
 reviewers on the global requirements project. The review stats for the
 last 90 days on the project show that several current core reviewers
 haven't been active, so as a first step before adding new cores I
 propose that we make sure everyone who is currently core is still
 interested in participating.
 
 The current list of cores is visible in
 gerrit: https://review.openstack.org/#/admin/groups/131,members
 
 I generated a set of review stats for the last 90 days
 using git://git.openstack.org/openstack-infra/reviewstats
 http://git.openstack.org/openstack-infra/reviewstats and posted the
 results in http://paste.openstack.org/show/74046/.
 
 We had a few reviewers with 0-1 reviews in the last 90 days:
 
 Dan Prince
 Dave Walker
 Gabriel Hurley
 Joe Heck
 Eric Windisch
 
 If any of you wish to remain on the core reviewer list during Juno,
 speak up. Otherwise we'll purge the list around the time of the
 dependency freeze (Thierry, let me know if you had different timing in
 mind for that).

OK, list purged.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [horizon] xstatic for removing bundled js libs

2014-03-26 Thread Maxime Vidori
Hi Radomir,

I quickly looked at xstatic and I have the impression that you have to handle 
the retrieval of the JavaScript library files and the versions of the scripts 
manually. What do you think of wrapping bower in a script in order to 
dynamically generate packages with the right versions? 

- Original Message -
From: Radomir Dopieralski openst...@sheep.art.pl
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, March 26, 2014 9:25:26 AM
Subject: [openstack-dev] [horizon] xstatic for removing bundled js libs

Hello,

Before we split Horizon into two parts, we need to deal with a couple of
related cleanup tasks. One of them is getting rid of the bundled
JavaScript libraries that we are using. They are currently just included
in the Horizon's source tree. Ideally, we would like to have them
installed as dependencies. There is a blueprint for that at:

https://blueprints.launchpad.net/horizon/+spec/remove-javascript-bundling

We have several options for actually doing this. One of them is
selecting an appropriate django-* library, where available, and using
whatever additional API and code the author of the library made
available. We need to choose carefully, and every library has to be
approached separately for this.

I propose a more general solution of using xstatic-* Python packages,
which contain basically just the files that we want to bundle, plus some
small amount of metadata. All of the JavaScript (and any static files,
really, including styles, fonts and icons) would be then handled the
same way, by just adding the relevant package to the requirements and to
the settings.py file. Packaging the libraries that are missing is very
easy (as there is no extra code to write), and we get to share the
effort with other projects that use xstatic.
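
For example, the glue needed on the Horizon side is tiny (a rough sketch
only, assuming the usual xstatic packaging convention where each
xstatic.pkg.<name> module exposes a BASE_DIR; the exact wiring in Horizon's
settings may differ):

    # Sketch: exposing an xstatic-packaged library to Django's static file
    # handling. STATICFILES_DIRS is Django's setting; the xstatic layout
    # assumed here is the conventional one.
    import xstatic.pkg.jquery as xs_jquery

    STATICFILES_DIRS = [
        # served under {{ STATIC_URL }}lib/jquery/
        ('lib/jquery', xs_jquery.BASE_DIR),
    ]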

Anyways, I made a proof of concept patch that demonstrates this approach
for the jQuery library. Obviously it fails Jenkins tests, as the xstatic
and xstatic-jquery packages are not included in the global requirements,
but it shows how little effort is involved. You can see the patch at:

https://review.openstack.org/#/c/82516/

Any feedback and suggestions appreciated.
-- 
Radomir Dopieralski



Re: [openstack-dev] [horizon] xstatic for removing bundled js libs

2014-03-26 Thread Radomir Dopieralski
On 26/03/14 11:43, Maxime Vidori wrote:
[...]
 I propose a more general solution of using xstatic-* Python packages,
 which contain basically just the files that we want to bundle, plus some
 small amount of metadata. All of the JavaScript (and any static files,
 really, including styles, fonts and icons) would be then handled the
 same way, by just adding the relevant package to the requirements and to
 the settings.py file. Packaging the libraries that are missing is very
 easy (as there is no extra code to write), and we get to share the
 effort with other projects that use xstatic.
[...]

 Hi Radomir,
 
 I quickly looked at xstatic and I have the impression that you have to handle
 manually the retrievement of the javascript library files and the version of
 the scripts. What do you think of wrapping bower in a script in order to
 dynamically generate packages with the right versions?

How the actual xstatic-* packages are created and maintained is up to
the packager. They are not part of Horizon, so you can use any tools
you wish to create and update them.

-- 
Radomir Dopieralski



Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread John Garbutt
Sounds like an extra weigher to try and balance load between your two AZs
might be a nicer way to go.
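
A minimal sketch of what such a weigher could look like (illustrative only,
assuming nova's BaseHostWeigher interface; the class name is made up):

    # Sketch: prefer hosts with fewer instances, which roughly balances
    # placement across the two AZs.
    from nova.scheduler import weights

    class AZBalanceWeigher(weights.BaseHostWeigher):
        def _weigh_object(self, host_state, weight_properties):
            # Fewer running instances -> higher weight -> preferred host.
            return -host_state.num_instances

It would then be enabled through the scheduler's weigher configuration
(e.g. something like the scheduler_weight_classes option).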

The easiest way might be via cells, one for each AZ. But I'm not sure we have
merged that support yet. There are patches for that.
John
On 25 Mar 2014 20:53, Sangeeta Singh sin...@yahoo-inc.com wrote:

  Hi,

  The availability zones filter states that theoretically a compute node
 can be part of multiple availability zones. I have a requirement where I
 need to make a compute node part of 2 AZs. When I try to create a host
 aggregate with an AZ, I cannot add the node to two host aggregates that have
 an AZ defined. However, if I create a host aggregate without associating an AZ,
 then I can add the compute nodes to it. After doing that I can update the
 host aggregate and associate an AZ. This looks like a bug.

  I can see the compute node to be listed in the 2 AZ with the
 availability-zone-list command.

  The problem that I have is that I can still not boot a VM on the compute
 node when I do not specify the AZ in the command though I have set the
 default availability zone and the default schedule zone in nova.conf.

  I get the error ERROR: The requested availability zone is not available

  What I am trying to achieve is to have two AZs that the user can select
 during boot, but then have a default AZ which has the hypervisors from both AZ1
 and AZ2, so that when the user does not specify any AZ in the boot command I
 scatter my VMs across both AZs in a balanced way.

  Any pointers.

  Thanks,
 Sangeeta



Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-26 Thread Jiří Stránský

(Removing [Heat] from the subject.)

So here are the steps I think are necessary to get the PKI setup done 
and safely passed through Jenkins. If anyone thinks something is 
redundant or missing, please shout:


1. Patch to os-cloud-config:

  * Generation of keys and certs for cases where the user doesn't want to
specify their own - mainly PoC deployments. (Generation happens
in-memory, which is better for Tuskar than having to write
keys/certs to disk - we might have different sets for different
overclouds.) A rough sketch of this in-memory generation follows the
step list below.

  * Implement also a function that will write the keys/certs to a
specified location on disk (in-memory generation is not well
suited for use within Devtest).

2. Patch to T-I-E:

  * os-cloud-config image element.

3. Patch to tripleo-incubator (dependent on patches 1 and 2):

  * Generate keys using os-cloud-config and pass them into heat-create
if the T-H-T supports that (this is to make sure the next T-H-T
patch passes). Keep doing the current init-keystone anyway.

4. Patch to T-H-T (dependent on patch 3):

  * Accept 3 new parameters for controller nodes: KeystoneCACert,
KeystoneSigningKey, KeystoneSigningCert. Default them to empty
string so that they are not required (otherwise we'd have to
implement logic forking also for Tuskar, because it's
chicken-and-egg there too).

5. Patch to tuskar (dependent on patch 4):

  * Use os-cloud-config to generate keys and certs if user didn't
specify their own, pass new parameters to T-H-T.

6. Patch to T-I-E (dependent on patch 5):

  * Add the certs and signing key to keystone's os-apply-config
templates. Change key location to /etc instead of
/mnt/state/etc. Devtest should keep working because calling
`keystone-manage pki_setup` on already initialized system does not
have significant effect. It will keep generating a useless CA key,
but that will stop with patch 7.

7. Cleanup patch to tripleo-incubator (dependent on patch 6):

  * Remove conditional on passing the 3 new parameters only if
supported, pass them always.

  * Remove call to pki_setup.
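
As a rough sketch of the in-memory generation mentioned in step 1 (using
pyOpenSSL; illustrative only, not the actual os-cloud-config code):

    # Sketch: generate a CA and a Keystone signing key/cert entirely in
    # memory and return PEM strings, so callers can pass them to Heat as
    # parameters or write them to disk for the devtest case.
    from OpenSSL import crypto

    def generate_ca_and_signing_cert(valid_secs=10 * 365 * 24 * 60 * 60):
        ca_key = crypto.PKey()
        ca_key.generate_key(crypto.TYPE_RSA, 2048)
        ca = crypto.X509()
        ca.get_subject().CN = 'Keystone CA'
        ca.set_serial_number(1)
        ca.gmtime_adj_notBefore(0)
        ca.gmtime_adj_notAfter(valid_secs)
        ca.set_issuer(ca.get_subject())
        ca.set_pubkey(ca_key)
        ca.sign(ca_key, 'sha256')

        signing_key = crypto.PKey()
        signing_key.generate_key(crypto.TYPE_RSA, 2048)
        signing = crypto.X509()
        signing.get_subject().CN = 'Keystone Signing'
        signing.set_serial_number(2)
        signing.gmtime_adj_notBefore(0)
        signing.gmtime_adj_notAfter(valid_secs)
        signing.set_issuer(ca.get_subject())
        signing.set_pubkey(signing_key)
        signing.sign(ca_key, 'sha256')

        return (crypto.dump_certificate(crypto.FILETYPE_PEM, ca),
                crypto.dump_privatekey(crypto.FILETYPE_PEM, signing_key),
                crypto.dump_certificate(crypto.FILETYPE_PEM, signing))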


Regarding the cloud initialization as a whole, on Monday I sent a patch 
for creating users, roles, etc. [1]. The parts still missing are endpoint 
registration [2,3] and neutron setup [4].


If anyone is willing to spare some cycles on endpoint registration or 
neutron setup, or to make the image element for os-cloud-config (patch no. 2 
in the above list), it would be great, as we'd like to have this finished as 
soon as possible.



Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/
[2] 
https://github.com/openstack/tripleo-incubator/blob/4e2e8de41ba91a5699ea4eb9091f6ef4c95cf0ce/scripts/init-keystone#L111-L114
[3] 
https://github.com/openstack/tripleo-incubator/blob/4e2e8de41ba91a5699ea4eb9091f6ef4c95cf0ce/scripts/setup-endpoints
[4] 
https://github.com/openstack/tripleo-incubator/blob/4e2e8de41ba91a5699ea4eb9091f6ef4c95cf0ce/scripts/setup-neutron




[openstack-dev] [depfreeze][horizon] Exception for lesscpy=0.10.1

2014-03-26 Thread Sascha Peilicke
Hi,

there's been a review up for some time [0] that wants to raise the version of 
lesscpy to 0.10.1. It's specific to horizon and contains some important fixes 
that we'll likely want to include. So I'd like to ask for an exception for 
this one.

[0] https://review.openstack.org/#/c/70619/
-- 
Viele Grüße,
Sascha Peilicke



Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Baldassin, Santiago B
I would say that the requirement is not valid. A host aggregate can only have 
one availability zone, so what you actually can have is a compute node that's 
part of 2 host aggregates which have the same availability zone.

In the scenario you mentioned below where you create the aggregates without 
associating the availability zone, after updating the aggregates with the 
zones, the hosts still share the same availability zone right?

From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: Wednesday, March 26, 2014 8:47 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host 
aggregates..


Sounds like an extra weighter to try and balance load between your two AZs 
might be a nicer way to go.

The easiest way might be via cells, one for each AZ . But not sure we merged 
that support yet. But there are patches for that.

John
On 25 Mar 2014 20:53, Sangeeta Singh sin...@yahoo-inc.com wrote:
Hi,

The availability Zones filter states that theoretically a compute node can be 
part of multiple availability zones. I have a requirement where I need to make 
a compute node part to 2 AZ. When I try to create a host aggregates with AZ I 
can not add the node in two host aggregates that have AZ defined. However if I 
create a host aggregate without associating an AZ then I can add the compute 
nodes to it. After doing that I can update the host-aggregate an associate an 
AZ. This looks like a bug.

I can see the compute node to be listed in the 2 AZ with the 
availability-zone-list command.

The problem that I have is that I can still not boot a VM on the compute node 
when I do not specify the AZ in the command though I have set the default 
availability zone and the default schedule zone in nova.conf.

I get the error ERROR: The requested availability zone is not available

What I am  trying to achieve is have two AZ that the user can select during the 
boot but then have a default AZ which has the HV from both AZ1 AND AZ2 so that 
when the user does not specify any AZ in the boot command I scatter my VM on 
both the AZ in a balanced way.

Any pointers.

Thanks,
Sangeeta



Re: [openstack-dev] [depfreeze][horizon] Exception for lesscpy=0.10.1

2014-03-26 Thread Adam Nelson
I'm not sure why there's so much resistance to Python package version
minimums being increased.  Everybody should be using virtualenvs anyway so
it's not like there's some sort of need to support old libraries because
that's what's on deployed OSes.

I understand supporting old kernels, system libraries, etc... but just
don't get why Python libs should be held back.

In addition, depfreeze blocking version upgrades is really an old-fashioned
way of thinking.

--
Kili - Cloud for Africa: kili.io
Musings: twitter.com/varud
More Musings: varud.com
About Adam: www.linkedin.com/in/adamcnelson


On Wed, Mar 26, 2014 at 3:14 PM, Sascha Peilicke sasc...@mailbox.org wrote:

 Hi,

 there's been a review up for some time [0] that wants to raise the version
 of
 lesscpy to 0.10.1. It's specific to horizon and contains some important
 fixes
 that we'll likely want to include. So I'd like to ask for an exception for
 this one.

 [0] https://review.openstack.org/#/c/70619/
 --
 Viele Grüße,
 Sascha Peilicke



Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-03-26 Thread Shlomi Sasson
Hi Ramy,

There is quite a bit of difference between FC and iSCSI (e.g. iqn vs WWN),
while iSER is just an alternative iSCSI transport and uses the exact same tools 
on the initiator and target as iSCSI/TCP.
Most iSER-capable iSCSI targets don't even have a separate configuration for 
TCP or RDMA, and would accept both transport options on a given logical iSCSI 
target (which is why we don't need different plug-ins for targets, with the 
exception of STGT).

On the initiator side the only difference between TCP and RDMA is in the 
interface flag (--interface=[iface])
e.g. iscsiadm -m discoverydb -t st -p ip:port -I iser --discover

so we don't need a full new class to propagate a simple flag; we'd rather make 
this a simple parameter.
We also thought of modifying the behavior so that default_rdma would mean try 
RDMA and, if it fails, fall back to TCP, to simplify operations.
Shlomi

From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
Sent: Tuesday, March 25, 2014 17:55
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support 
other iSCSI transports besides TCP

Hi Shlomi,

Another solution to consider is to create a subclass per transport (iSCSI, 
iSER) which reference the same shared common code.
This is the solution used for the 3PAR iSCSI and FC transports. See these for 
reference:
cinder/volume/drivers/san/hp/hp_3par_common.py
cinder/volume/drivers/san/hp/hp_3par_fc.py
cinder/volume/drivers/san/hp/hp_3par_iscsi.py

Hope this helps.

Ramy

From: Shlomi Sasson [mailto:shlo...@mellanox.com]
Sent: Tuesday, March 25, 2014 8:07 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other 
iSCSI transports besides TCP

Hi,

I want to share with the community the following challenge:
Currently, vendors who have their own iSCSI driver and want to add the RDMA transport 
(iSER) cannot leverage their existing plug-in, which inherits from iSCSI,
and must modify their driver or create an additional plug-in driver which 
inherits from iSER and copies the exact same code.

Instead I believe a simpler approach is to add a new attribute to ISCSIDriver 
to support other iSCSI transports besides TCP, which will allow minimal changes 
to support iSER.
The existing ISERDriver code will be removed; this will eliminate significant 
code and class duplication, and will work with all the iSCSI vendors who 
support both TCP and RDMA without the need to modify their plug-in drivers.

To achieve that, both cinder and nova require slight changes:
For cinder, I wish to add a parameter called transport (defaulting to iscsi) to 
distinguish between the transports and use the existing iscsi_ip_address 
parameter for any transport type connection.
For nova, I wish to add a parameter called default_rdma (default to false) to 
enable initiator side.
The outcome will avoid code duplication and the need to add more classes.

I am not sure what the right approach to handle this will be. I already have 
the code; should I open a bug or a blueprint to track this issue?

Best Regards,
Shlomi



Re: [openstack-dev] [depfreeze][horizon] Exception for lesscpy=0.10.1

2014-03-26 Thread Chuck Short
Hi,

Could you possibly add what's new in the changelog as well?

Thanks
chuck


On Wed, Mar 26, 2014 at 8:14 AM, Sascha Peilicke sasc...@mailbox.org wrote:

 Hi,

 there's been a review up for some time [0] that wants to raise the version
 of
 lesscpy to 0.10.1. It's specific to horizon and contains some important
 fixes
 that we'll likely want to include. So I'd like to ask for an exception for
 this one.

 [0] https://review.openstack.org/#/c/70619/
 --
 Viele Grüße,
 Sascha Peilicke



Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-26 Thread Russell Bryant
On 03/25/2014 10:01 PM, Dolph Mathews wrote:
 Given that intention, I believe the proper thing to do is to actually
 leave the API marked as fully supported / stable.  Keystone should be
 working with other OpenStack projects to migrate them to v3.  Once that
 is complete, deprecation can be re-visited.
 
 
 Happy to!
 
 Revert deprecation of the v2 API: https://review.openstack.org/#/c/82963/
 
 Although I'd prefer to apply this patch directly to milestone-proposed,
 so we can continue into Juno with the deprecation in master.

I think it should be reverted completely.  Otherwise, the problem hasn't
been solved.  Some deployments chase trunk and we'd still have this
confusion in the dev community, as well.

-- 
Russell Bryant



Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-26 Thread Russell Bryant
On 03/26/2014 06:30 AM, Thierry Carrez wrote:
 Russell Bryant wrote:
 [...]
 First, it seems there isn't a common use of deprecated.  To me,
 marking something deprecated means that the deprecated feature:

  - has been completely replaced by something else

  - end users / deployers should take action to migrate to the
new thing immediately.

  - The project has provided a documented migration path

  - the old thing will be removed at a specific date/release
 
 Agreed, IMHO we need to reserve the use the deprecated terminology for
 the idea of moving end users, deployers, external client library
 developers (people outside of OpenStack direct reach) off a given API
 version. Deprecation is about giving them a fair heads-up about
 something that is about to be removed, so that they are encouraged to
 move off it. It needs to be discussed and announced with the user
 community, and come with a precise plan.
 
 Internal consumption of APIs between OpenStack projects is a different
 beast: (1) it's under our control and (2) we really can't remove an API
 until all our internal pieces have migrated off it.
 
 So I wouldn't use deprecation warnings to encourage other OpenStack
 projects to move off an API. They can't come with a precise date since
 if projects don't comply with this suggestion we just can't remove
 that API support. I would therefore go this way:
 
 1. API vN is stable and supported
 2. API vN+1 is being developed and experimental
 3. API vN+1 is marked stable and supported
 4. Engage with other consuming OpenStack projects to migrate to vN+1
 5. Migration is completed
 6. Deprecation plan (and removal date) is discussed with stakeholders
 7. Deprecation plan (and removal date) is decided and announced
 8. Deprecation messages are added to code for API vN users
 9. At removal date, API vN is removed
 
 Keystone is at step 4. It shouldn't use deprecation terminology before
 step 6.
 
 If step 4 is blocked, project should first raise the issue at
 cross-project meetings, and if all else fails at the TC level.
 

I think you did a very nice job of capturing the ideal steps here.
Totally agreed.

-- 
Russell Bryant



Re: [openstack-dev] [depfreeze][horizon] Exception for lesscpy=0.10.1

2014-03-26 Thread Sean Dague
It's not expected that you are installing all of openstack into venvs,
it's expected that it works at a system level.

That's always been a design point given that Linux distributions
actually want to ship all this stuff.

-Sean



-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





[openstack-dev] [TripleO] proxying SSL traffic for API requests

2014-03-26 Thread stuart . mclaren

All,

I know there's a preference for using a proxy to terminate
SSL connections rather than using the native python code.

There's a good write up of configuring the various proxies here:

http://docs.openstack.org/security-guide/content/ch020_ssl-everywhere.html

If we're not using native python SSL termination in TripleO we'll
need to pick which one of these would be a reasonable choice for
initial https support.

Pound may be a good choice -- it's lightweight (6,000 lines of C),
easy to configure, and gives good control over the SSL connections (ciphers etc.).
Plus, we have experience with pushing large (GB) requests through it.

I'm interested if others have a strong preference for one of the other
options (stud, nginx, apache) and if so, what are the reasons you feel it
would make a better choice for a first implementation.

Thanks,

-Stuart



Re: [openstack-dev] [TripleO] proxying SSL traffic for API requests

2014-03-26 Thread stuart . mclaren

Just spotted the openstack-ssl element which uses 'stunnel'...




Re: [openstack-dev] [Neutron][IPv6] Neutron Routers and LLAs

2014-03-26 Thread Robert Li (baoli)
Hi Sean,

Unless I have missed something, this is my thinking:
  -- I understand that the goal is to allow RAs from designated sources
only.
  -- initially, xuhanp posted a diff for
https://review.openstack.org/#/c/72252. And my comment was that a subnet
created with a gateway IP not on the same subnet can't be added
to the neutron router.
  -- as a result, https://review.openstack.org/#/c/76125/ was posted to
address that issue. With that diff, an LLA would be allowed. But a
consequence of that is that a gateway port would end up having two LLAs: one
that is automatically generated, the other from the subnet gateway IP.
  -- with xuhanp's new diff for https://review.openstack.org/#/c/72252, if
openstack native RA is enabled, then the automatically generated LLA will
be used; and if it's not enabled, it will use the external gateway's LLA.
And the diff seems to indicate this LLA comes from the subnet's gateway
IP.
  -- Therefore, the change in https://review.openstack.org/#/c/76125/
seems to make it possible to add the gateway IP as an external gateway.
  -- Thus, my question is: should such a subnet be allowed to be added to a
router? And if it should, what would the semantics be? If not, a proper
error should be provided to the user. I'm also trying to figure out the
reason that such a subnet needs to be created in neutron (other than
creating L2 ports for VMs).

-- Another thought is that if the RA is coming from the provider net, then
the provider net should have mechanisms installed to prevent rogue RAs
from entering the network. There are a few RFCs that address the rogue RA
issue. 

see inline as well.

I hope that I didn't confuse you guys.

Thanks,
Robert


On 3/25/14 2:18 PM, Collins, Sean sean_colli...@cable.comcast.com
wrote:

During the review[0] of the patch that only allows RAs from known
addresses, Robert Li brought up a bug in Neutron, where an
IPv6 subnet could be created with a link-local address for the gateway;
creating the Neutron router would then fail because the IP address that
the router's port would be assigned was a link-local
address that was not on the subnet.

This may or may not have been run before the force_gateway_on_subnet flag was
introduced. Robert - if you can give us what version of Neutron you were
running that would be helpful.

[Robert] I'm using the latest



Here's the full text of what Robert posted in the review, which shows
the bug, which was later filed[1].

 This is what I've tried, creating a subnet with a LLA gateway address:
 
 neutron subnet-create --ip-version 6 --name myipv6sub --gateway
fe80::2001:1 mynet :::/64

 Created a new subnet:
 
 +------------------+--------------------------------------+
 | Field            | Value                                |
 +------------------+--------------------------------------+
 | allocation_pools | {start: :::1, end: ::::::fffe}       |
 | cidr             | :::/64                               |
 | dns_nameservers  |                                      |
 | enable_dhcp      | True                                 |
 | gateway_ip       | fe80::2001:1                         |
 | host_routes      |                                      |
 | id               | a1513aa7-fb19-4b87-9ce6-25fd238ce2fb |
 | ip_version       | 6                                    |
 | name             | myipv6sub                            |
 | network_id       | 9c25c905-da45-4f97-b394-7299ec586cff |
 | tenant_id        | fa96d90f267b4a93a5198c46fc13abd9     |
 +------------------+--------------------------------------+
 
 openstack@devstack-16:~/devstack$ neutron router-list

 
 +--------------------------------------+---------+------------------------------------------------------------------------+
 | id                                   | name    | external_gateway_info                                                  |
 +--------------------------------------+---------+------------------------------------------------------------------------+
 | 7cf084b4-fafd-4da2-9b15-0d25a3e27e67 | router1 | {network_id: 02673c3c-35c3-40a9-a5c2-9e5c093aca48, enable_snat: true}  |
 +--------------------------------------+---------+------------------------------------------------------------------------+

 openstack@devstack-16:~/devstack$ neutron router-interface-add
7cf084b4-fafd-4da2-9b15-0d25a3e27e67 myipv6sub

 400-{u'NeutronError': {u'message': u'Invalid input for operation: IP
address fe80::2001:1 is not a valid IP for the defined subnet.',
u'type': u'InvalidInput', u'detail': u''}}


During last week's meeting, we had a bit of confusion near the end of the
meeting[2] about the following bug, and the fix[3].

If I am not mistaken - the fix is so that when you create a v6 Subnet
with a link local address, then create a Neutron router to serve as the
gateway for that subnet - the operation will successfully complete and a
router will be created.

We may need to take a look at the code that creates a router - to ensure
that only one gateway port is created, and that the link local address
from the subnet's 'gateway' attribute is used as the address.

[Robert] We are discussing what's going to happen when such a subnet is
added into a router. The neutron router may have already existed.



This is at least my understanding of the problem as it 

Re: [openstack-dev] [TripleO][reviews] We're falling behind

2014-03-26 Thread mar...@redhat.com
On 26/03/14 11:50, Ladislav Smola wrote:
 +1
 I'll do my best

ditto

 


Re: [openstack-dev] [TripleO] proxying SSL traffic for API requests

2014-03-26 Thread Chris Jones
Hi

We don't have a strong attachment to stunnel though, I quickly dropped it in 
front of our CI/CD undercloud and Rob wrote the element so we could repeat the 
deployment.

In the fullness of time I would expect there to exist elements for several SSL 
terminators, but we shouldn't necessarily stick with stunnel because it 
happened to be the one I was most familiar with :)

I would think that an httpd would be a good option to go with as the default, 
because I tend to think that we'll need an httpd running/managing the python 
code by default.

Cheers,
--
Chris Jones

 On 26 Mar 2014, at 13:49, stuart.mcla...@hp.com wrote:
 
 Just spotted the openstack-ssl element which uses 'stunnel'...
 
 
 On Wed, 26 Mar 2014, stuart.mcla...@hp.com wrote:
 
 All,
 
 I know there's a preference for using a proxy to terminate
 SSL connections rather than using the native python code.
 
 There's a good write up of configuring the various proxies here:
 
 http://docs.openstack.org/security-guide/content/ch020_ssl-everywhere.html
 
 If we're not using native python SSL termination in TripleO we'll
 need to pick which one of these would be a reasonable choice for
 initial https support.
 
 Pound may be a good choice -- its lightweight (6,000 lines of C),
 easy to configure and gives good control over the SSL connections (ciphers 
 etc).
 Plus, we've experience with pushing large (GB) requests through it.
 
 I'm interested if others have a strong preference for one of the other
 options (stud, nginx, apache) and if so, what are the reasons you feel it
 would make a better choice for a first implementation.
 
 Thanks,
 
 -Stuart
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][reviews] We're falling behind

2014-03-26 Thread Derek Higgins
On 25/03/14 20:17, Robert Collins wrote:
 TripleO has just seen an influx of new contributors. \o/. Flip side -
 we're now slipping on reviews /o\.
 
 In the meeting today we had basically two answers: more cores, and
 more work by cores.
 
 We're slipping by 2 reviews a day, which given 16 cores is a small amount.
 
 I'm going to propose some changes to core in the next couple of days -
 I need to go and re-read a bunch of reviews first - but, right now we
 don't have a hard lower bound on the number of reviews we request
 cores commit to (on average).
 
 We're seeing 39/day from the 16 cores - which isn't enough as we're
 falling behind. That's 2.5 or so. So - I'd like to ask all cores to
 commit to doing 3 reviews a day, across all of tripleo (e.g. if your
 favourite stuff is all reviewed, find two new things to review even if
 outside comfort zone :)).
 
 And we always need more cores - so if you're not a core, this proposal
 implies that we'll be asking that you a) demonstrate you can sustain 3
 reviews a day on average as part of stepping up, and b) be willing to
 commit to that.
 
 Obviously if we have enough cores we can lower the minimum commitment
 - so I don't think this figure should be fixed in stone.
 
 And now - time for a loose vote - who (who is a tripleo core today)
 supports / disagrees with this proposal - let's get some consensus
 here.
Sounds reasonable to me, +1

 
 I'm in favour, obviously :), though it is hard to put reviews ahead of
 direct itch scratching, it's the only way to scale the project.
 
 -Rob
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo library release plan for juno

2014-03-26 Thread Doug Hellmann
Over the course of the Icehouse release, the Oslo team has invested a lot
of time in creating processes and tools to prepare for releasing code from
the incubator as a set of new libraries. The plan we have put together for
creating 9 new libraries during the Juno release is available for review at
https://wiki.openstack.org/wiki/Oslo/JunoGraduationPlans

I have proposed a cross-project summit session to iron out some details,
but please don't wait until then if you have comments or questions about
the plan.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze][horizon] Exception for lesscpy=0.10.1

2014-03-26 Thread Julie Pichon
On 26/03/14 12:14, Sascha Peilicke wrote:
 Hi,
 
 there's been a review up for some time [0] that wants to raise the version of 
 lesscpy to 0.10.1. It's specific to horizon and contains some important fixes 
 that we'll likely want to include. So I'd like to ask for an exception for 
 this one.
 
 [0] https://review.openstack.org/#/c/70619/
 

The review comments indicate this is needed for Bootstrap 3, which was
deferred to Juno. Is it needed for something else in Icehouse? Is
something broken without it? If not, it seems to me it's probably ok to
merge only at the beginning of Juno.

Otherwise, apparently I've been running with 0.10.1 in some of my VMs
for a few weeks and didn't notice any problems, FWIW. It also has the
advantage of being a proper release as opposed to beta.

Thanks,

Julie


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

2014-03-26 Thread Solly Ross
Hi,
I currently have a patch up for review 
(https://review.openstack.org/#/c/81373/) to limit psutil to versions below 2.0.0.
2.0.0 just came out a couple weeks ago, and breaks the API in a major way.  
Until we can port our code to the
latest version, I suggest we limit the version of psutil to 1.x (currently 
there's a lower bound in the 1.x
range, just not an upper bound).
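
For reference, the kind of cap being proposed would look something like this in
requirements.txt (the exact lower bound shown here is illustrative; the point is
the upper bound excluding 2.0.0):

    psutil>=1.1.1,<2.0.0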

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Chris Friesen

On 03/25/2014 02:50 PM, Sangeeta Singh wrote:


What I am trying to achieve is to have two AZs that the user can select
during boot, but then have a default AZ which has the HVs from both
AZ1 and AZ2, so that when the user does not specify any AZ in the boot
command I scatter my VMs across both AZs in a balanced way.


I haven't actually tried it, but it might be worth configuring two 
different alternate availability zones each with half of the resources. 
 Then if the user doesn't specify a zone they just get the nova zone 
which should then balance by load.


My impression was that the support for multiple host aggregates was 
intended to support orthogonal groupings of hosts.  So hosts with 
SSDs could be one group, and hosts with 10gig ethernet another, and 
hosts with lots of RAM another group, and in that case you could have 
hosts that are part of any or all of those groups.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

2014-03-26 Thread Sean Dague
On 03/26/2014 10:30 AM, Solly Ross wrote:
 Hi,
 I currently have a patch up for review 
 (https://review.openstack.org/#/c/81373/) to limit psutil to versions below 2.0.0.
 2.0.0 just came out a couple weeks ago, and breaks the API in a major way.  
 Until we can port our code to the
 latest version, I suggest we limit the version of psutil to 1.x (currently 
 there's a lower bound in the 1.x
 range, just not an upper bound).

Which code will be broken by this if it's not done? Is there an RC bug
tracking it?

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-26 Thread Susanne Balle
Jorge: I agree with you around ensuring different drivers support the API
contract and the no vendor lock-in.

All: How do we move this forward? It sounds like we have agreement that
this is worth investigating.

How do we move forward with the investigation and how to best architect
this? Is this a topic for tomorrow's LBaaS weekly meeting? or should I
schedule a hang-out meeting for us to discuss?

Susanne




On Tue, Mar 25, 2014 at 6:16 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:

   Hey Susanne,

  I think it makes sense to group drivers by each LB software. For
 example, there would be a driver for HAProxy, one for Citrix's Netscalar,
 one for Riverbed's Stingray, etc. One important aspect about Openstack that
 I don't want us to forget though is that a tenant should be able to move
 between cloud providers at their own will (no vendor lock-in). The API
 contract is what allows this. The challenging aspect is ensuring different
 drivers support the API contract in the same way. What components should
 drivers share is also an interesting conversation to be had.

  Cheers,
 --Jorge

   From: Susanne Balle sleipnir...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, March 25, 2014 6:59 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and
 managed services

   John, Brandon,

 I agree that we cannot have a multitude of drivers doing the same thing or
 close to it, because then we end up in the same situation as we are today, where
 we have duplicate effort and technical debt.

  The goal here would be to be able to build a framework around the
 drivers that would allow for resiliency, failover, etc...

  If the differentiators are in higher-level APIs then we can have a
 single driver (in the best case) for each software LB, e.g. HA proxy, nginx,
 etc.

  Thoughts?

  Susanne


 On Mon, Mar 24, 2014 at 11:26 PM, John Dewey j...@dewey.ws wrote:

 I have a similar concern.  The underlying driver may support different
 functionality, but the differentiators need to be exposed through the top-level
 API.

  I see the SSL work is well underway, and I am in the process of
 defining L7 scripting requirements.  However, I will definitely need L7
 scripting prior to the API being defined.
 Is this where vendor extensions come into play?  I kinda like the route
 the Ironic guys are taking with a vendor passthru API.

  John

 On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:

   Creating a separate driver for every new need brings up a concern I
 have had.  If we are to implement a separate driver for every need then the
  permutations are endless and may cause a lot of drivers and technical debt.
   If someone wants an ha-haproxy driver then great.  What if they want it to
  be scalable and/or HA, is there supposed to be scalable-ha-haproxy,
  scalable-haproxy, and ha-haproxy drivers?  Then what if, instead of
  spinning up processes on the host machine, we want a nova VM or a container
 to house it?  As you can see the permutations will begin to grow
 exponentially.  I'm not sure there is an easy answer for this.  Maybe I'm
 worrying too much about it because hopefully most cloud operators will use
 the same driver that addresses those basic needs, but worst case scenarios
 we have a ton of drivers that do a lot of similar things but are just
 different enough to warrant a separate driver.
  --
 *From:* Susanne Balle [sleipnir...@gmail.com]
 *Sent:* Monday, March 24, 2014 4:59 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and
 managed services

   Eugene,

  Thanks for your comments,

  See inline:

  Susanne


  On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov 
 enikano...@mirantis.com wrote:

  Hi Susanne,

  a couple of comments inline:




 We would like to discuss adding the concept of managed services to the
 Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
 proxy. The latter could be a second approach for some of the software
 load-balancers e.g. HA proxy since I am not sure that it makes sense to
 deploy Libra within Devstack on a single VM.



 Currently users would have to deal with HA, resiliency, monitoring and
 managing their load-balancers themselves.  As a service provider we are
 taking a more managed service approach allowing our customers to consider
 the LB as a black box and the service manages the resiliency, HA,
 monitoring, etc. for them.



   As far as I understand these two abstracts, you're talking about
 making LBaaS API more high-level than it is right now.
 I think that was not on our roadmap because another project (Heat) is
 taking care of more abstracted service.
 The LBaaS goal is to provide vendor-agnostic 

Re: [openstack-dev] [TripleO] proxying SSL traffic for API requests

2014-03-26 Thread stuart . mclaren

Thanks Chris.

Sounds like you're saying building out the apache element may be a sensible
next step?

-Stuart


Hi

We don't have a strong attachment to stunnel though, I quickly dropped it in 
front of our CI/CD undercloud and Rob wrote the element so we could repeat the 
deployment.

In the fullness of time I would expect there to exist elements for several SSL 
terminators, but we shouldn't necessarily stick with stunnel because it 
happened to be the one I was most familiar with :)

I would think that an httpd would be a good option to go with as the default, 
because I tend to think that we'll need an httpd running/managing the python 
code by default.

Cheers,
--
Chris Jones


On 26 Mar 2014, at 13:49, stuart.mclaren at hp.com wrote:

Just spotted the openstack-ssl element which uses 'stunnel'...



On Wed, 26 Mar 2014, stuart.mclaren at hp.com wrote:

All,

I know there's a preference for using a proxy to terminate
SSL connections rather than using the native python code.

There's a good write up of configuring the various proxies here:

http://docs.openstack.org/security-guide/content/ch020_ssl-everywhere.html

If we're not using native python SSL termination in TripleO we'll
need to pick which one of these would be a reasonable choice for
initial https support.

Pound may be a good choice -- it's lightweight (6,000 lines of C),
easy to configure and gives good control over the SSL connections (ciphers etc.).
Plus, we've had experience with pushing large (GB) requests through it.

I'm interested if others have a strong preference for one of the other
options (stud, nginx, apache) and if so, what are the reasons you feel it
would make a better choice for a first implementation.

Thanks,

-Stuart


___
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SR-IOV and IOMMU check

2014-03-26 Thread 张磊强
I think it can be used as a capability of the host.

What do you think about treating it as one type of the HostState, and returning
it in the nova.scheduler.host_manager.HostManager.get_all_host_states
method?
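
For illustration, a rough sketch (not actual Nova code) of how such a capability
could be consumed by the scheduler once it is reported in HostState; the
'iommu_sriov_capable' stat and the filter name are assumptions made up for this
example:

    from nova.scheduler import filters

    class SriovCapableFilter(filters.BaseHostFilter):
        """Only pass hosts that have reported working IOMMU + SR-IOV."""

        def host_passes(self, host_state, filter_properties):
            # Only enforce the check when the request actually asks for
            # PCI / SR-IOV devices (where exactly that flag lives in
            # filter_properties is glossed over here).
            if not filter_properties.get('pci_requests'):
                return True
            # Hypothetical stat published by the compute node.
            return host_state.stats.get('iommu_sriov_capable') == 'true'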


2014-03-26 11:09 GMT+08:00 Gouzongmei gouzong...@huawei.com:

  Hi, Yang, Yi y



 Agree with you,  IOMMU and SR-IOV need to be checked beforehand.



 I think it should be checked before booting an instance with the PCI
 flavor, that is, when the flavor contains some normal PCI cards or SR-IOV
 cards, just like when you find there are pci_requests in the instance
 system_metadata.



 The details are beyond my current knowledge.



 Hope this helps.

 *From:* Yang, Yi Y [mailto:yi.y.y...@intel.com]
 *Sent:* Wednesday, March 26, 2014 10:51 AM
 *To:* openstack-dev@lists.openstack.org
 *Subject:* [openstack-dev] SR-IOV and IOMMU check



 Hi, all



 Currently openstack can support SR-IOV device pass-through (at least there
 are some patches for this), but the prerequisite is that both IOMMU and
 SR-IOV must be enabled correctly, and it seems there is not a robust way to
 check this in openstack. I have implemented a way to do this and hope it
 can be committed upstream; it can help find the issue beforehand,
 instead of letting kvm report no IOMMU found only once the VM is
 started. I didn't find an appropriate place to put this. Do you think
 this is necessary? Where could it go? Your advice is welcome, and thank
 you in advance.
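
For illustration, a rough sketch (not the actual patch) of the kind of host-side
check being described, using only sysfs; the function names are made up for this
example:

    import os

    def iommu_enabled():
        # With VT-d / AMD-Vi active, the kernel populates IOMMU groups here.
        groups = '/sys/kernel/iommu_groups'
        return os.path.isdir(groups) and bool(os.listdir(groups))

    def sriov_capable(pci_addr):
        # e.g. pci_addr = '0000:03:00.0'; sriov_totalvfs only exists for
        # SR-IOV capable devices.
        path = '/sys/bus/pci/devices/%s/sriov_totalvfs' % pci_addr
        try:
            with open(path) as f:
                return int(f.read()) > 0
        except (IOError, ValueError):
            return False

Running both checks on the compute node before handling a PCI request would
surface the misconfiguration before kvm ever sees the guest.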

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-26 Thread Thomas Spatzier
Hi Dimitry,

the current working draft for the simplified profile in YAML is available
at [1]. Note that this is still work in progress, but should already give a
good impression of where we want to go. And as I said, we are open for
input.

The stackforge project [2] that Sahdev from our team created is in its
final setup phase (gerrit review still has to be set up), as far as I
understood it. My information is that parser code for TOSCA YAML according
to the current working draft is going to get in by end of the week. This
code is currently maintained in Sahdev's own github repo [3]. Sahdev (IRC
spzala) would be the best contact for the moment when it comes to detail
questions on the code.

[1]
https://www.oasis-open.org/committees/document.php?document_id=52571&wg_abbrev=tosca
[2] https://github.com/stackforge/heat-translator
[3] https://github.com/spzala/heat-translator

Regards,
Thomas

 From: Dmitry mey...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 26/03/2014 11:17
 Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

 Hi Thomas,
 Can you share some documentation of what you're doing right now with
 TOSCA-compliant layer?
 We would like to join to this effort.

 Thanks,
 Dmitry


 On Wed, Mar 26, 2014 at 10:38 AM, Thomas Spatzier
thomas.spatz...@de.ibm.com
  wrote:
 Excerpt from Zane Bitter's message on 26/03/2014 02:26:42:

  From: Zane Bitter zbit...@redhat.com
  To: openstack-dev@lists.openstack.org
  Date: 26/03/2014 02:27
  Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
 

 snip

   Cloud administrators are usually technical guys that are capable of
   learning HOT and writing YAML templates. They know exact
configuration
   of their cloud (what services are available, what is the version of
   OpenStack cloud is running) and generally understands how OpenStack
   works. They also know about software they intend to install. If such
 guy
   wants to install Drupal he knows exactly that he needs HOT template
   describing Fedora VM with Apache + PHP + MySQL + Drupal itself. It is
   not a problem for him to write such HOT template.
 
  I'm aware that TOSCA has these types of constraints, and in fact I
  suggested to the TOSCA TC that maybe this is where we should draw the
  line between Heat and some TOSCA-compatible service: HOT should be a
  concrete description of exactly what you're going to get, whereas some
  other service (in this case Murano) would act as the constraints
solver.
  e.g. something like an image name would not be hardcoded in a Murano
  template, you have some constraints about which operating system and
  what versions should be allowed, and it would pick one and pass it to
  Heat. So I am interested in this approach.

 I can just support Zane's statements above. We are working on exactly
those
 issues in the TOSCA YAML definition, so it would be ideal to just
 collaborate on this. As Zane said, there currently is a thinking that
some
 TOSCA-compliant layer could be a (maybe thin) layer above Heat that
 resolves a more abstract (thus more portable) template into something
 concrete, executable. We have started developing code (early versions are
 on stackforge already) to find out the details.

 
  The worst outcome here would be to end up with something that was
  equivalent to TOSCA but not easily translatable to the TOSCA Simple
  Profile YAML format (currently a Working Draft). Where 'easily
  translatable' preferably means 'by just changing some names'. I can't
  comment on whether this is the case as things stand.
 

 The TOSCA Simple Profile in YAML is a working draft at the moment, so we
 are pretty much open for any input. So let's see to get the right folks
 together and get it right. Since the Murano folks have indicated before
 that they are evaluating the option to join the OASIS TC, I am optimistic
 that we can get the streams together. Having implementation work going on
 here in this community in parallel to the standards work, and both
streams
 inspiring each other, will be fun :-)


 Regards,
 Thomas


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Nova] Havana virtio_blk | kvm, kernel panic on VM boot

2014-03-26 Thread Ben Nemec
This is a development list and your question sounds like a usage one. 
Please try asking on the users list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Thanks.

-Ben

On 03/26/2014 04:24 AM, saurabh agarwal wrote:

I have compiled a new linux kernel with CONFIG_VIRTIO_BLK=y. But it
doesn't boot and kernel panics. On the kernel command line I tried passing
root=/dev/vda and root=/dev/vda1, but the same kernel panic comes every time.
VIRTIO_NET was working fine when VIRTIO_BLK was not enabled, and the VM
booted up fine. But with virtio-blk I see the kernel panic below. Can
someone please suggest what could be going wrong?

VFS: Cannot open root device vda or unknown-block(253,0)
Please append a correct root= boot option; here are the available
partitions:
fd00 8388608 vda  driver: virtio_blk
   fd01 7340032 vda1 ----
   fd02  512000 vda2 ----
   fd03  535552 vda3 ----
Kernel panic - not syncing: VFS: Unable to mount root fs on
unknown-block(253,0)

Regards,
Saurabh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Setting up a new meeting in openstack-meeting*

2014-03-26 Thread Thierry Carrez
Shaunak Kashyap wrote:
 We are looking to setup a recurring IRC meeting in one of the 
 openstack-meeting* rooms for the 
 https://wiki.openstack.org/wiki/OpenStack-SDK-PHP project. What is the 
 process for setting this up (i.e. “reserving” a room for a particular time 
 slot each week)?
 
 Is there a formal request to be made somewhere OR could I find an open slot 
 and add our meeting to https://wiki.openstack.org/wiki/Meetings?

Just add it to https://wiki.openstack.org/wiki/Meetings

 If it is the latter, how do we get the meeting added to 
 https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics?

I'm subscribed to changes on the wiki page and do my best reflecting
them on the Google calendar.

We are looking into replacing this human-intensive process with
something more automated, but it's not done yet.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-03-26 Thread Vishvananda Ishaya
This all makes sense to me. I would suggest a blueprint and a pull request as 
soon as Juno opens.

Vish

On Mar 25, 2014, at 8:07 AM, Shlomi Sasson shlo...@mellanox.com wrote:

 Hi,
  
 I want to share with the community the following challenge:
 Currently, vendors who have their own iSCSI driver and want to add the RDMA
 transport (iSER) cannot leverage their existing plug-in, which inherits from
 iSCSI, and must modify their driver or create an additional plug-in driver
 which inherits from iSER and copies the exact same code.

 Instead, I believe a simpler approach is to add a new attribute to ISCSIDriver
 to support other iSCSI transports besides TCP, which will allow minimal
 changes to support iSER.
 The existing ISERDriver code would be removed; this will eliminate significant
 code and class duplication, and will work with all the iSCSI vendors who
 support both TCP and RDMA without the need to modify their plug-in drivers.
  
 To achieve that, both cinder & nova require slight changes:
 For cinder, I wish to add a parameter called “transport” (defaulting to iscsi)
 to distinguish between the transports, and to use the existing “iscsi_ip_address”
 parameter for any transport type connection.
 For nova, I wish to add a parameter called “default_rdma” (defaulting to false)
 to enable the initiator side.
 The outcome will avoid code duplication and the need to add more classes.
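
For illustration, a very rough sketch of the idea (option names and defaults here
are assumptions, not the actual patch): a single transport option on the iSCSI
driver, with the connection type derived from it so the initiator side can pick
the matching connector:

    from oslo.config import cfg

    iscsi_transport_opts = [
        cfg.StrOpt('transport',
                   default='iscsi',
                   help='iSCSI transport: "iscsi" (TCP) or "iser" (RDMA).'),
    ]
    CONF = cfg.CONF
    CONF.register_opts(iscsi_transport_opts)

    def build_connection_info(iscsi_properties):
        # Nova keys its volume connector off this type, so one driver
        # class can serve both transports.
        return {'driver_volume_type': CONF.transport,
                'data': iscsi_properties}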
  
 I am not sure what the right approach to handle this would be. I already have
 the code; should I open a bug or a blueprint to track this issue?
  
 Best Regards,
 Shlomi
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze][horizon] Exception for lesscpy=0.10.1

2014-03-26 Thread Sascha Peilicke
On Wednesday 26 March 2014 13:49:48 Julie Pichon wrote:
 On 26/03/14 12:14, Sascha Peilicke wrote:
  Hi,
  
  there's been a review up for some time [0] that wants to raise the version
  of lesscpy to 0.10.1. It's specific to horizon and contains some
  important fixes that we'll likely want to include. So I'd like to ask for
  an exception for this one.
  
  [0] https://review.openstack.org/#/c/70619/
 
 The review comments indicate this is needed for Bootstrap 3, which was
 deferred to Juno. Is it needed for something else in Icehouse? Is
 something broken without it? If not, it seems to me it's probably ok to
 merge only at the beginning of Juno.
 
 Otherwise, apparently I've been running with 0.10.1 in some of my VMs
 for a few weeks and didn't notice any problems, FWIW. It also has the
 advantage of being a proper release as opposed to beta.

Would work for me too, let's defer it then.

-- 
Viele Grüße,
Sascha Peilicke

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Neutron + Nova + OVS security group fix

2014-03-26 Thread Salvatore Orlando
Apologies for double-posting.

I've commented again regarding the alternatives to the solution currently
on review.
Please find more info on the bug report [1]

I've attached patches for a hackish but (imo) cleaner solution, and patches
for a compact fix along the lines of what I and Akihiro were saying.

Salvatore

[1] https://bugs.launchpad.net/neutron/+bug/1297469


On 26 March 2014 09:02, Salvatore Orlando sorla...@nicira.com wrote:

 The thread branched, and it's getting long.
 I'm trying to summarize the discussion for other people to quickly catch
 up.

 - The bug being targeted is
 https://bugs.launchpad.net/neutron/+bug/1297469
 It has also been reported as
 https://bugs.launchpad.net/neutron/+bug/1252620 and as
 https://bugs.launchpad.net/nova/+bug/1248859
 The fix for bug 1112912 also included a fix for it.
 - The problem is the generic VIF driver does not perform hybrid plugging
 which is required by Neutron when running with ML2 plugin and OVS mech
 driver
 - The proposed patches (#21946 and #44596) are however very unlikely to
 merge in icehouse
 - An alternative approach has been proposed (
 https://review.openstack.org/#/c/82904/); this will 'specialize' the
 GenericVIF driver for use with neutron.
 It is meant to be a temporary workaround pending a permanent solution. It's
 not adding conf variables, but probably has DocImpact.
 If that works for nova core, that works for me as well
 - An idea regarding leveraging VIF_TYPE to fix the issue has also been
 floated. This would constitute a fix which might be improved in the future,
 and is still small and targeted. However, we still need to look at the issue
 Nachi is pointing out regarding the fact that a libvirt network filter name
 should not be added to the guest config.

 Salvatore


 On 26 March 2014 05:57, Akihiro Motoki mot...@da.jp.nec.com wrote:

 Hi Nachi and the teams,

 (2014/03/26 9:57), Salvatore Orlando wrote:
  I hope we can sort this out on the mailing list IRC, without having to
 schedule emergency meetings.
 
  Salvatore
 
  On 25 March 2014 22:58, Nachi Ueno na...@ntti3.com mailto:
 na...@ntti3.com wrote:
 
  Hi Nova, Neturon Team
 
  I would like to discuss issue of Neutron + Nova + OVS security
 group fix.
  We have a discussion in IRC today, but the issue is complicated so
 we will have
  a conf call tomorrow 17:00 UST (10AM PDT). #openstack-neutron
 
  (I'll put conf call information in IRC)
 
 
  thanks, but I'd prefer you discuss the matter on IRC.
  I won't be available at that time and having IRC logs on eavesdrop will
 allow me to catch up without having to ask people or waiting for minutes on
 the mailing list.

 I can't join the meeting too. It is midnight.

 
  -- Please let me know if this time won't work with you.
 
  Bug Report
  https://bugs.launchpad.net/neutron/+bug/1297469
 
  Background of this issue:
  ML2 + OVSDriver + IptablesBasedFirewall combination is a default
  plugin setting in the Neutron.
  In this case, we need a special handing in VIF. Because OpenVSwitch
  don't support iptables, we are
  using linuxbride + openvswitch bridge. We are calling this as
 hybrid driver.
 
 
  The hybrid solution in Neutron has been around for such a long time
 that I would hardly call it a special handling.
  To summarize, the VIF is plugged into a linux bridge, which has another
 leg plugged in the OVS integration bridge.
 
  On the other discussion, we generalized the Nova  side VIF plugging
 to
  the Libvirt GenericVIFDriver.
  The idea is let neturon tell the VIF plugging configration details
 to
  the GenericDriver, and GerericDriver
  takes care of it.
 
 
  The downside of the generic driver is that so far it's assuming local
 configuration values are sufficient to correctly determine VIF plugging.
  The generic VIF driver will use the hybrid driver if
 get_firewall_required is true. And this will happen if the firewall driver
 is anything different from the NoOp driver.
  This was uncovered by a devstack commit (1143f7e). When I previously
 discussed with the people involved this issue, I was under the impression
 that the devstack patch introduced the problem.
  Apparently the Generic VIF driver is not taking at the moments hints
 from neutron regarding the driver to use, and therefore, from what I
 gather, makes a decision based on nova conf flags only.
  So a quick fix would be to tell the Generic VIF driver to always use
 hybrid plugging when neutron is enabled (which can be gathered by nova conf
 flags).
  This will fix the issue for ML2, but will either break or insert an
 unnecessary extra hop for other plugins.

 When the generic VIF driver is introduced, OVS VIF driver and the hybrid
 VIF driver are
 considered same as e as both are pluggged into OVS and the hybrid driver
 is implemeted
 as a variation of OVS driver, but the thing is not so simple than the
 first thought.
 The hybrid driver solution lives such a long time and IMO the 

Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-26 Thread Dmitry
Thank you very much for the info!


On Wed, Mar 26, 2014 at 6:07 PM, Thomas Spatzier thomas.spatz...@de.ibm.com
 wrote:

 Hi Dimitry,

 the current working draft for the simplified profile in YAML is available
 at [1]. Note that this is still work in progress, but should already give a
 good impression of where we want to go. And as I said, we are open for
 input.

 The stackforge project [2] that Sahdev from our team created is in its
 final setup phase (gerrit review still has to be set up), as far as I
 understood it. My information is that parser code for TOSCA YAML according
 to the current working draft is going to get in by end of the week. This
 code is currently maintained in Sahdev's own github repo [3]. Sahdev (IRC
 spzala) would be the best contact for the moment when it comes to detail
 questions on the code.

 [1]

  https://www.oasis-open.org/committees/document.php?document_id=52571&wg_abbrev=tosca
 [2] https://github.com/stackforge/heat-translator
 [3] https://github.com/spzala/heat-translator

 Regards,
 Thomas

  From: Dmitry mey...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 26/03/2014 11:17
  Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
 
  Hi Thomas,
  Can you share some documentation of what you're doing right now with
  TOSCA-compliant layer?
  We would like to join to this effort.
 
  Thanks,
  Dmitry
 

  On Wed, Mar 26, 2014 at 10:38 AM, Thomas Spatzier
 thomas.spatz...@de.ibm.com
   wrote:
  Excerpt from Zane Bitter's message on 26/03/2014 02:26:42:
 
   From: Zane Bitter zbit...@redhat.com
   To: openstack-dev@lists.openstack.org
   Date: 26/03/2014 02:27
   Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
  
 
  snip
 
Cloud administrators are usually technical guys that are capable of
learning HOT and writing YAML templates. They know exact
 configuration
of their cloud (what services are available, what is the version of
OpenStack cloud is running) and generally understands how OpenStack
    works. They also know about software they intend to install. If such
  guy
wants to install Drupal he knows exactly that he needs HOT template
describing Fedora VM with Apache + PHP + MySQL + Drupal itself. It is
not a problem for him to write such HOT template.
  
   I'm aware that TOSCA has these types of constraints, and in fact I
   suggested to the TOSCA TC that maybe this is where we should draw the
   line between Heat and some TOSCA-compatible service: HOT should be a
   concrete description of exactly what you're going to get, whereas some
   other service (in this case Murano) would act as the constraints
 solver.
   e.g. something like an image name would not be hardcoded in a Murano
   template, you have some constraints about which operating system and
   what versions should be allowed, and it would pick one and pass it to
   Heat. So I am interested in this approach.

  I can just support Zane's statements above. We are working on exactly
 those
  issues in the TOSCA YAML definition, so it would be ideal to just
  collaborate on this. As Zane said, there currently is a thinking that
 some
  TOSCA-compliant layer could be a (maybe thin) layer above Heat that
  resolves a more abstract (thus more portable) template into something
  concrete, executable. We have started developing code (early versions are
  on stackforge already) to find out the details.
 
  
   The worst outcome here would be to end up with something that was
   equivalent to TOSCA but not easily translatable to the TOSCA Simple
   Profile YAML format (currently a Working Draft). Where 'easily
   translatable' preferably means 'by just changing some names'. I can't
   comment on whether this is the case as things stand.
  

  The TOSCA Simple Profile in YAML is a working draft at the moment, so we
  are pretty much open for any input. So let's see to get the right folks
  together and get it right. Since the Murano folks have indicated before
  that they are evaluating the option to join the OASIS TC, I am optimistic
  that we can get the streams together. Having implementation work going on
  here in this community in parallel to the standards work, and both
 streams
  inspiring each other, will be fun :-)
 
 
  Regards,
  Thomas
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list

Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Vishvananda Ishaya
Personally I view this as a bug. There is no reason why we shouldn’t support 
arbitrary grouping of zones. I know there is at least one problem with zones 
that overlap regarding displaying them properly:

https://bugs.launchpad.net/nova/+bug/1277230

There is probably a related issue that is causing the error you see below. IMO 
both of these should be fixed. I also think adding a compute node to two 
different aggregates with azs should be allowed.

It also might be nice to support specifying multiple zones in the launch 
command in these models. This would allow you to limit booting to an 
intersection of two overlapping zones.

A few examples where these ideas would be useful:

1. You have 3 racks of servers and half of the nodes from each rack plugged 
into a different switch. You want to be able to specify to spread across racks 
or switches via an AZ. In this model you could have a zone for each switch and 
a zone for each rack.

2. A single cloud has 5 racks in one room in the datacenter and 5 racks in a 
second room. You’d like to give control to the user to choose the room or 
choose the rack. In this model you would have one zone for each room, and 
smaller zones for each rack.

3. You have a small 3 rack cloud and would like to ensure that your production 
workloads don’t run on the same machines as your dev workloads, but you also 
want to use zones spread workloads across the three racks. Similarly to 1., you 
could split your racks in half via dev and prod zones. Each one of these zones 
would overlap with a rack zone.

You can achieve similar results in these situations by making small zones 
(switch1-rack1 switch1-rack2 switch1-rack3 switch2-rack1 switch2-rack2 
switch2-rack3) but that removes the ability to decide to launch something with 
less granularity. I.e. you can’t just specify ‘switch1' or ‘rack1' or ‘anywhere’

I’d like to see all of the following work:
nova boot … (boot anywhere)
nova boot --availability-zone switch1 … (boot in switch1 zone)
nova boot --availability-zone rack1 … (boot in rack1 zone)
nova boot --availability-zone switch1,rack1 … (boot in the intersection of switch1 and rack1)
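
For illustration, once overlapping membership is allowed, the setup from example 1
could be built with the existing CLI along these lines (today it is the second
AZ-backed aggregate-add-host that gets rejected):

    nova aggregate-create switch1 switch1
    nova aggregate-create rack1 rack1
    nova aggregate-add-host switch1 compute-01
    nova aggregate-add-host rack1 compute-01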

Vish

On Mar 25, 2014, at 1:50 PM, Sangeeta Singh sin...@yahoo-inc.com wrote:

 Hi,
 
 The availability Zones filter states that theoretically a compute node can be 
 part of multiple availability zones. I have a requirement where I need to 
 make a compute node part of 2 AZs. When I try to create a host aggregate with
 an AZ, I cannot add the node to two host aggregates that have an AZ defined.
 However, if I create a host aggregate without associating an AZ, then I can add
 the compute nodes to it. After doing that I can update the host aggregate and
 associate an AZ. This looks like a bug.
 
 I can see the compute node to be listed in the 2 AZ with the 
 availability-zone-list command.
 
 The problem that I have is that I can still not boot a VM on the compute node 
 when I do not specify the AZ in the command though I have set the default 
 availability zone and the default schedule zone in nova.conf.
 
 I get the error “ERROR: The requested availability zone is not available”
 
 What I am trying to achieve is to have two AZs that the user can select during
 boot, but then have a default AZ which has the HVs from both AZ1 and AZ2, so
 that when the user does not specify any AZ in the boot command I scatter my
 VMs across both AZs in a balanced way.
 
 Any pointers.
 
 Thanks,
 Sangeeta
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] proxying SSL traffic for API requests

2014-03-26 Thread Clint Byrum
Excerpts from Chris Jones's message of 2014-03-26 06:58:59 -0700:
 Hi
 
 We don't have a strong attachment to stunnel though, I quickly dropped it in 
 front of our CI/CD undercloud and Rob wrote the element so we could repeat 
 the deployment.
 
 In the fullness of time I would expect there to exist elements for several 
 SSL terminators, but we shouldn't necessarily stick with stunnel because it 
 happened to be the one I was most familiar with :)
 
 I would think that an httpd would be a good option to go with as the default, 
 because I tend to think that we'll need an httpd running/managing the python 
 code by default.
 

I actually think that it is important to separate SSL termination from
the app server. In addition to reasons of scale (SSL termination scales
quite a bit differently than app serving), there is a security implication
in having the private SSL keys on the same box that runs the app.

So if we use apache for running the python app servers, that is not a
reason to also use apache for SSL. Quite the opposite I think.

As far as which is best.. there are benefits and drawbacks for all of
them, and it is modular enough that we can just stick with stunnel and
users who find problems with it can switch it out without too much hassle.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Jenkins test logs and their retention period

2014-03-26 Thread Doug Hellmann
On Tue, Mar 25, 2014 at 5:34 PM, Brant Knudson b...@acm.org wrote:




 On Mon, Mar 24, 2014 at 5:49 AM, Sean Dague s...@dague.net wrote:

 ...

 Part of the challenge is turning off DEBUG is currently embedded in code
 in oslo log, which makes it kind of awkward to set sane log levels for
 included libraries because it requires an oslo round trip with code to
 all the projects to do it.


 Here's how it's done in Keystone:
 https://review.openstack.org/#/c/62068/10/keystone/config.py

 It's definitely awkward.
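
For illustration, the kind of per-library override being discussed boils down to
something like this (the module names are just examples of typically noisy
libraries):

    import logging

    # Quiet chatty third-party libraries without touching the service's
    # own DEBUG logging.
    for mod in ('iso8601', 'urllib3.connectionpool'):
        logging.getLogger(mod).setLevel(logging.WARNING)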


https://bugs.launchpad.net/oslo/+bug/1297950

Doug





 - Brant


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-26 Thread Doug Hellmann
On Tue, Mar 25, 2014 at 11:41 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Dolph Mathews's message of 2014-03-25 19:01:17 -0700:
  On Tue, Mar 25, 2014 at 5:50 PM, Russell Bryant rbry...@redhat.com
 wrote:
 
   We discussed the deprecation of the v2 keystone API in the
 cross-project
   meeting today [1].  This thread is to recap and bring that discussion
 to
   some consensus.
  
   The issue is that Keystone has marked the v2 API as deprecated in
 Icehouse:
  
   https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api
  
   If you use the API, deployments will get this in their logs:
  
   WARNING keystone.openstack.common.versionutils [-] Deprecated: v2 API
 is
   deprecated as of Icehouse in favor of v3 API and may be removed in K.
  
   The deprecation status is reflected in the API for end users, as well.
   For example, from the CLI:
  
 $ keystone discover
 Keystone found at http://172.16.12.38:5000/v2.0
   - supports version v2.0 (deprecated) here
   http://172.16.12.38:5000/v2.0/
  
   My proposal is that this deprecation be reverted.  Here's why:
  
   First, it seems there isn't a common use of deprecated.  To me,
   marking something deprecated means that the deprecated feature:
  
- has been completely replaced by something else
  
 
- end users / deployers should take action to migrate to the
  new thing immediately.
  
 
- The project has provided a documented migration path
 
- the old thing will be removed at a specific date/release
  
 
  Agree on all points. Unfortunately, we have yet to succeed on the
  documentation front:
 
 
 
 https://blueprints.launchpad.net/keystone/+spec/document-v2-to-v3-transition
 
  
   The problem with the above is that most OpenStack projects do not
   support the v3 API yet.
  
   From talking to Dolph in the meeting, it sounds like the intention is:
  
- fully support v2, just don't add features
  
- signal to other projects that they should be migrating to v3
  
 
  Above all else, this was our primary goal: to raise awareness about our
  path forward, and to identify the non-obvious stakeholders that we needed
  to work with in order to drop support for v2. With today's discussion as
  evidence, I think we've succeeded in that regard :)
 
  
   Given that intention, I believe the proper thing to do is to actually
   leave the API marked as fully supported / stable.  Keystone should be
   working with other OpenStack projects to migrate them to v3.  Once that
   is complete, deprecation can be re-visited.
  
 
  Happy to!
 
  Revert deprecation of the v2 API:
 https://review.openstack.org/#/c/82963/
 
  Although I'd prefer to apply this patch directly to milestone-proposed,
 so
  we can continue into Juno with the deprecation in master.
 

 As somebody maintaining a few master-chasing CD clouds, I'd like to ask
 you to please stop the squawking about deprecation until it has a definite
 replacement and most if not all OpenStack core projects are using it.

 1 out of every 2 API calls on these clouds produces one of these errors
 in Keystone. That is just pointless. :-P


This is a good point. The other thing we discussed was whether it is
appropriate to announce deprecation in this way. I'm not sure that
logging *inside* the service is useful, but we don't yet have a way to
announce to the client that the call invoked is deprecated. We talked about
having a cross-project session at the summit, and collaborating with the
SDK team, to brainstorm solutions to that problem.

Doug




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

2014-03-26 Thread Solly Ross
Code which breaks:

Glance's multiprocessing tests will break (the reason we should limit it now).
For the future, people attempting to use psutil will have no clear version 
target
(Either they use 1.x and break with the people who install the latest version 
from pip,
of they use 2.0.0 and break with everything that doesn't use the latest 
version).

psutil's API is extremely unstable -- it has undergone major revisions going 
from 0.x to 1.x, and now
1.x to 2.0.0.  Limiting psutil explicitly to a single major version (it was 
more or less implicitly limited
before, since there was no version major version above 1) ensures that the 
requirements.txt file actually
indicates what is necessary to use OpenStack.

The alternative option would be to update the glance tests, but my concern is 
that 2.0.0 is not available
from the package managers of most distros yet.

Best Regards,
Solly Ross

- Original Message -
From: Sean Dague s...@dague.net
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, March 26, 2014 10:39:41 AM
Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

On 03/26/2014 10:30 AM, Solly Ross wrote:
 Hi,
 I currently have a patch up for review 
 (https://review.openstack.org/#/c/81373/) to limit psutil to versions below 2.0.0.
 2.0.0 just came out a couple weeks ago, and breaks the API in a major way.  
 Until we can port our code to the
 latest version, I suggest we limit the version of psutil to 1.x (currently 
 there's a lower bound in the 1.x
 range, just not an upper bound).

Which code will be broken by this if it's not done? Is there an RC bug
tracking it?

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] proxying SSL traffic for API requests

2014-03-26 Thread Chris Jones
Hi

On 26 March 2014 16:51, Clint Byrum cl...@fewbar.com wrote:

 quite a bit differently than app serving), there is a security implication
 in having the private SSL keys on the same box that runs the app.


This is a very good point, thanks :)

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Chris Friesen

On 03/26/2014 10:47 AM, Vishvananda Ishaya wrote:

Personally I view this as a bug. There is no reason why we shouldn’t
support arbitrary grouping of zones. I know there is at least one
problem with zones that overlap regarding displaying them properly:

https://bugs.launchpad.net/nova/+bug/1277230


There's also this bug that I reported a while back, where nova will let 
you specify multiple host aggregates with the same zone name.


https://bugs.launchpad.net/nova/+bug/1213224

If the end user then specifies the availability zone for an instance, it 
is unspecified which aggregate will be used.



There is probably a related issue that is causing the error you see
below. IMO both of these should be fixed. I also think adding a compute
node to two different aggregates with azs should be allowed.

It also might be nice to support specifying multiple zones in the launch
command in these models. This would allow you to limit booting to an
intersection of two overlapping zones.


Totally agree.  I actually mention both of these cases in the comments 
for the bug above.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Khanh-Toan Tran


- Original Message -
 From: Sangeeta Singh sin...@yahoo-inc.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, March 25, 2014 9:50:00 PM
 Subject: [openstack-dev] [nova][scheduler] Availability Zones and Host 
 aggregates..
 
 Hi,
 
 The availability Zones filter states that theoretically a compute node can be
 part of multiple availability zones. I have a requirement where I need to
 make a compute node part of 2 AZs. When I try to create a host aggregate
 with an AZ, I cannot add the node to two host aggregates that have an AZ defined.
 However, if I create a host aggregate without associating an AZ, then I can
 add the compute nodes to it. After doing that I can update the
 host aggregate and associate an AZ. This looks like a bug.
 
 I can see the compute node to be listed in the 2 AZ with the
 availability-zone-list command.
 

Yes, it appears to be a bug to me (apparently the AZ metadata insertion is treated
as normal metadata, so no check is done), and so does the message in the
AvailabilityZoneFilter. I don't know why you need a compute node that belongs
to 2 different availability zones. Maybe I'm wrong, but to me it's logical that
availability zones do not share the same compute nodes. The role of
availability zones is to partition your compute nodes into zones that are
physically separated (broadly speaking this would require separation of
physical servers, networking equipment, power sources, etc.), so that when a user
deploys 2 VMs in 2 different zones, he knows that these VMs do not land on the
same host, and if one zone fails the others continue working; thus the client
will not lose all of his VMs. It is smaller in scope than Regions, which ensure
total separation at the cost of low-layer connectivity and central management
(e.g. scheduling per region).

See: http://www.linuxjournal.com/content/introduction-openstack

The purpose of grouping hosts with the same characteristics, on the other hand,
is served by host aggregates.

 The problem that I have is that I can still not boot a VM on the compute node
 when I do not specify the AZ in the command though I have set the default
 availability zone and the default schedule zone in nova.conf.
 
 I get the error “ERROR: The requested availability zone is not available”
 
 What I am trying to achieve is to have two AZs that the user can select during
 boot, but then have a default AZ which has the HVs from both AZ1 and AZ2,
 so that when the user does not specify any AZ in the boot command I scatter
 my VMs across both AZs in a balanced way.
 

I do not understand your goal. When you create two availability zones and put 
ALL of your compute nodes into these AZs, then if you don't specify the AZ in 
your request, the AvailabilityZoneFilter will automatically accept all hosts.
The default weigher (RamWeigher) will then distribute the workload fairly among 
these nodes regardless of the AZ they belong to. Maybe that is what you want?

 Any pointers.
 
 Thanks,
 Sangeeta
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Setting up a new meeting in openstack-meeting*

2014-03-26 Thread Shaunak Kashyap
Thanks Thierry. I’ve added the meeting to the wiki page.

Shaunak

On Mar 26, 2014, at 12:25 PM, Thierry Carrez thie...@openstack.org wrote:

 Shaunak Kashyap wrote:
 We are looking to setup a recurring IRC meeting in one of the 
 openstack-meeting* rooms for the 
 https://wiki.openstack.org/wiki/OpenStack-SDK-PHP project. What is the 
 process for setting this up (i.e. “reserving” a room for a particular time 
 slot each week)?
 
 Is there a formal request to be made somewhere OR could I find an open slot 
 and add our meeting to https://wiki.openstack.org/wiki/Meetings?
 
 Just add it to https://wiki.openstack.org/wiki/Meetings
 
 If it is the latter, how do we get the meeting added to 
 https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics?
 
 I'm subscribed to changes on the wiki page and do my best reflecting
 them on the Google calendar.
 
 We are looking into replacing this human-intensive process with
 something more automated, but it's not done yet.
 
 Cheers,
 
 -- 
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-26 Thread Eoghan Glynn


 On 3/25/2014 1:50 PM, Matt Wagner wrote:
  This would argue to me that the easiest thing for Ceilometer might be
  to query us for IPMI stats, if the credential store is pluggable.
  Fetching these bare metal statistics doesn't seem too off-course for
  Ironic to me. The alternative is that Ceilometer and Ironic would both
  have to be configured for the same pluggable credential store.
 
 There is already a blueprint with a proposed patch here for Ironic to do
 the querying:
 https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.

Yes, so I guess there are two fundamentally different approaches that
could be taken here:

1. ironic controls the cadence of IPMI polling, emitting notifications
   at whatever frequency it decides, carrying whatever level of
   detail/formatting it deems appropriate, which are then consumed by
   ceilometer which massages these provided data into usable samples

2. ceilometer acquires the IPMI credentials either via ironic or
   directly from keystone/barbican, before calling out over IPMI at
   whatever cadence it wants and transforming these raw data into
   usable samples

IIUC approach #1 is envisaged by the ironic BP[1].

The advantage of approach #2 OTOH is that ceilometer is in the driving
seat as far as cadence is concerned, and the model is far more
consistent with how we currently acquire data from the hypervisor layer
and SNMP daemons.
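
For illustration, a rough sketch of what approach #2 implies on the ceilometer
side, assuming it has already obtained the credentials; a real implementation
would use a proper IPMI library rather than shelling out:

    import subprocess

    def poll_ipmi_sensors(host, user, password):
        out = subprocess.check_output(
            ['ipmitool', '-I', 'lanplus', '-H', host,
             '-U', user, '-P', password, 'sensor'])
        samples = {}
        for line in out.splitlines():
            # ipmitool sensor output is pipe-delimited:
            #   name | value | unit | status | ...
            fields = [f.strip() for f in line.split('|')]
            if len(fields) >= 3 and fields[1] not in ('na', ''):
                samples[fields[0]] = (fields[1], fields[2])
        return samples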

Cheers,
Eoghan


[1]  https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer
 
 I think, for terms of credential storage (and, for that matter, metrics
 gathering as I noted in that blueprint), it's very useful to have things
 pluggable. Ironic, in particular, has many different use cases: bare
 metal private cloud, bare metal public cloud, and triple-o. I could
 easily see all three being different enough to call for different forms
 of credential storage.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Sangeeta Singh


From: Baldassin, Santiago B 
santiago.b.baldas...@intel.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, March 26, 2014 at 5:17 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host 
aggregates..

I would say that the requirement is not valid. A host aggregate can only have 
one availability zone so what you actually can have is a compute node that’s 
part of 2 host aggregates, which actually have the same availability zone.

It is valid in our case: we have a superset host aggregate that contains all the 
hosts, and then we have subset host aggregates (AZs) based on PDU. The need is that 
our users can specify the AZ based on the PDU, but also, in case no AZ is 
specified, we want to load balance across the superset, which contains the two 
host aggregates (AZs).


In the scenario you mentioned below where you create the aggregates without 
associating the availability zone, after updating the aggregates with the 
zones, the hosts still share the same availability zone right?

No, the host becomes part of two availability zones, one for each of the host 
aggregates.

From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: Wednesday, March 26, 2014 8:47 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host 
aggregates..


Sounds like an extra weigher to try and balance load between your two AZs 
might be a nicer way to go.

The easiest way might be via cells, one for each AZ, but I am not sure we have 
merged that support yet. There are patches for that.

John
On 25 Mar 2014 20:53, Sangeeta Singh 
sin...@yahoo-inc.com wrote:
Hi,

The availability zones filter states that theoretically a compute node can be 
part of multiple availability zones. I have a requirement where I need to make 
a compute node part of 2 AZs. When I try to create a host aggregate with an AZ, I 
cannot add the node to two host aggregates that have AZs defined. However, if I 
create a host aggregate without associating an AZ, then I can add the compute 
nodes to it. After doing that I can update the host aggregate and associate an 
AZ. This looks like a bug.

I can see the compute node listed in the 2 AZs with the 
availability-zone-list command.

The problem that I have is that I still cannot boot a VM on the compute node 
when I do not specify the AZ in the command, though I have set the default 
availability zone and the default schedule zone in nova.conf.

I get the error “ERROR: The requested availability zone is not available”

What I am trying to achieve is to have two AZs that the user can select during 
boot, but then have a default AZ which has the hypervisors from both AZ1 and AZ2, 
so that when the user does not specify any AZ in the boot command I scatter my 
VMs across both AZs in a balanced way.

Any pointers.

Thanks,
Sangeeta

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-26 Thread Jay Faulkner
Comments inline.

On 3/26/14, 10:28 AM, Eoghan Glynn wrote:

 On 3/25/2014 1:50 PM, Matt Wagner wrote:
 This would argue to me that the easiest thing for Ceilometer might be
 to query us for IPMI stats, if the credential store is pluggable.
 Fetch these bare metal statistics doesn't seem too off-course for
 Ironic to me. The alternative is that Ceilometer and Ironic would both
 have to be configured for the same pluggable credential store.
 There is already a blueprint with a proposed patch here for Ironic to do
 the querying:
 https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
 Yes, so I guess there are two fundamentally different approaches that
 could be taken here:

 1. ironic controls the cadence of IPMI polling, emitting notifications
at whatever frequency it decides, carrying whatever level of
detail/formatting it deems appropriate, which are then consumed by
ceilometer which massages these provided data into usable samples

 2. ceilometer acquires the IPMI credentials either via ironic or
directly from keystone/barbican, before calling out over IPMI at
whatever cadence it wants and transforming these raw data into
usable samples

 IIUC approach #1 is envisaged by the ironic BP[1].

 The advantage of approach #2 OTOH is that ceilometer is in the driving
 seat as far as cadence is concerned, and the model is far more
 consistent with how we currently acquire data from the hypervisor layer
 and SNMP daemons.
Approach #1 also permits other systems to monitor this
information. Many organizations already have significant hardware
monitoring systems set up, and would not like to replace them with
Ceilometer in order to monitor BMCs registered with Ironic.

I think, especially for Ironic, being able to play nicely with things
outside of OpenStack is essential, as most users aren't going to replace
their entire datacenter management toolset with OpenStack... at least
not yet :).

Thanks,
Jay
 Cheers,
 Eoghan


 [1]  https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer
  
 I think, for terms of credential storage (and, for that matter, metrics
 gathering as I noted in that blueprint), it's very useful to have things
 pluggable. Ironic, in particular, has many different use cases: bare
 metal private cloud, bare metal public cloud, and triple-o. I could
 easily see all three being different enough to call for different forms
 of credential storage.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Meeting Tuesday March 25th at 19:00 UTC

2014-03-26 Thread Elizabeth Krumbach Joseph
On Mon, Mar 24, 2014 at 10:35 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 Hi everyone,

 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday March 25th, at 19:00 UTC in
 #openstack-meeting

Meeting minutes and logs from yesterday here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-03-25-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-03-25-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-03-25-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-26 Thread Russell Bryant
On 03/26/2014 12:56 PM, Doug Hellmann wrote:
 
 
 
 On Tue, Mar 25, 2014 at 11:41 PM, Clint Byrum cl...@fewbar.com
 1 out of every 2 API calls on these clouds produces one of these errors
 in Keystone. That is just pointless. :-P
 
 
 This is a good point. The other thing we discussed was whether it is
 appropriate to announce deprecation in this way. I'm not sure that
 logging *inside* the service is useful, but we don't yet have a way to
 announce to the client that the call invoked is deprecated. We talked
 about having a cross-project session at the summit, and collaborating
 with the SDK team, to brainstorm solutions to that problem.

Logging on half of the API calls is obviously bad, but logging a single
time seems reasonable.  Deployers need to know it's deprecated, too.  If
an API is going away, they have to make plans to test/deploy the new thing.
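
For illustration, a minimal log-once sketch (plain Python, not keystone's
actual mechanism; the helper name is made up):

    import logging

    LOG = logging.getLogger(__name__)
    _already_warned = set()

    def warn_deprecated_once(what):
        # Emit the deprecation warning only the first time 'what' is seen,
        # so deployers are still told without flooding the logs.
        if what not in _already_warned:
            _already_warned.add(what)
            LOG.warning("%s is deprecated and will be removed in a "
                        "future release", what)

    # Called on every v2 request, but only the first call actually logs:
    warn_deprecated_once("Identity API v2")
    warn_deprecated_once("Identity API v2")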

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Sangeeta Singh


On 3/26/14, 10:17 AM, Khanh-Toan Tran khanh-toan.t...@cloudwatt.com
wrote:



- Original Message -
 From: Sangeeta Singh sin...@yahoo-inc.com
 To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
 Sent: Tuesday, March 25, 2014 9:50:00 PM
 Subject: [openstack-dev] [nova][scheduler] Availability Zones and Host
aggregates..
 
 Hi,
 
 The availability Zones filter states that theoretically a compute node
can be
 part of multiple availability zones. I have a requirement where I need
to
 make a compute node part to 2 AZ. When I try to create a host aggregates
 with AZ I can not add the node in two host aggregates that have AZ
defined.
 However if I create a host aggregate without associating an AZ then I
can
 add the compute nodes to it. After doing that I can update the
 host-aggregate an associate an AZ. This looks like a bug.
 
 I can see the compute node to be listed in the 2 AZ with the
 availability-zone-list command.
 

Yes, it appears a bug to me (apparently the AZ metadata insertion is
treated as normal metadata, so no check is done), and so does the
message in the AvailabilityZoneFilter. I don't know why you need a
compute node that belongs to 2 different availability zones. Maybe I'm
wrong, but for me it's logical that availability zones do not share the
same compute nodes. The availability zones have the role of partitioning
your compute nodes into zones that are physically separated (in broad
terms that would require separation of physical servers, networking
equipment, power sources, etc.). So when a user deploys 2 VMs in 2
different zones, he knows that these VMs do not land on the same host and
that if one zone fails, the others continue working, so the client will not
lose all of his VMs. It's smaller than Regions, which ensure total
separation at the cost of low-layer connectivity and central management
(e.g. scheduling per region).

The need arises when you want both zones to be used
for scheduling when no specific zone is specified. The only way to do that
is either to have an AZ which is a superset of the two AZs, or for
default_scheduler_zone to be able to take a list of zones instead of
just one.

See: http://www.linuxjournal.com/content/introduction-openstack

The former purpose of grouping hosts with the same characteristics is
ensured by host aggregates.

 The problem that I have is that I can still not boot a VM on the
compute node
 when I do not specify the AZ in the command though I have set the
default
 availability zone and the default schedule zone in nova.conf.
 
 I get the error "ERROR: The requested availability zone is not
available"
 
 What I am  trying to achieve is have two AZ that the user can select
during
 the boot but then have a default AZ which has the HV from both AZ1 AND
AZ2
 so that when the user does not specify any AZ in the boot command I
scatter
 my VM on both the AZ in a balanced way.
 

I do not understand your goal. When you create two availability zones and
put ALL of your compute nodes into these AZs, then if you don't specify
the AZ in your request, the AZFilter will automatically accept all hosts.
The default weigher (RamWeigher) will then distribute the workload fairly
among these nodes regardless of the AZ they belong to. Maybe that is what you
want?

 Any pointers.
 
 Thanks,
 Sangeeta
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Sangeeta Singh


On 3/26/14, 10:17 AM, Khanh-Toan Tran khanh-toan.t...@cloudwatt.com
wrote:



- Original Message -
 From: Sangeeta Singh sin...@yahoo-inc.com
 To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
 Sent: Tuesday, March 25, 2014 9:50:00 PM
 Subject: [openstack-dev] [nova][scheduler] Availability Zones and Host
aggregates..
 
 Hi,
 
 The availability Zones filter states that theoretically a compute node
can be
 part of multiple availability zones. I have a requirement where I need
to
 make a compute node part to 2 AZ. When I try to create a host aggregates
 with AZ I can not add the node in two host aggregates that have AZ
defined.
 However if I create a host aggregate without associating an AZ then I
can
 add the compute nodes to it. After doing that I can update the
 host-aggregate an associate an AZ. This looks like a bug.
 
 I can see the compute node to be listed in the 2 AZ with the
 availability-zone-list command.
 

Yes, it appears a bug to me (apparently the AZ metadata insertion is
treated as normal metadata, so no check is done), and so does the
message in the AvailabilityZoneFilter. I don't know why you need a
compute node that belongs to 2 different availability zones. Maybe I'm
wrong, but for me it's logical that availability zones do not share the
same compute nodes. The availability zones have the role of partitioning
your compute nodes into zones that are physically separated (in broad
terms that would require separation of physical servers, networking
equipment, power sources, etc.). So when a user deploys 2 VMs in 2
different zones, he knows that these VMs do not land on the same host and
that if one zone fails, the others continue working, so the client will not
lose all of his VMs. It's smaller than Regions, which ensure total
separation at the cost of low-layer connectivity and central management
(e.g. scheduling per region).

See: http://www.linuxjournal.com/content/introduction-openstack

The former purpose of grouping hosts with the same characteristics is
ensured by host aggregates.

 The problem that I have is that I can still not boot a VM on the
compute node
 when I do not specify the AZ in the command though I have set the
default
 availability zone and the default schedule zone in nova.conf.
 
 I get the error "ERROR: The requested availability zone is not
available"
 
 What I am  trying to achieve is have two AZ that the user can select
during
 the boot but then have a default AZ which has the HV from both AZ1 AND
AZ2
 so that when the user does not specify any AZ in the boot command I
scatter
 my VM on both the AZ in a balanced way.
 

I do not understand your goal. When you create two availability zones and
put ALL of your compute nodes into these AZs, then if you don't specify
the AZ in your request, the AZFilter will automatically accept all hosts.
The default weigher (RamWeigher) will then distribute the workload fairly
among these nodes regardless of the AZ they belong to. Maybe that is what you
want?

  With Havana that does not happen, as there is a concept of
default_scheduler_zone, which is None if not specified, and when we specify
one we can only specify a single AZ, whereas in my case I basically want the 2
AZs that I create both to be considered default zones if nothing is
specified.

 Any pointers.
 
 Thanks,
 Sangeeta
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Chris Friesen

On 03/26/2014 11:17 AM, Khanh-Toan Tran wrote:


I don't know why you need a
compute node that belongs to 2 different availability-zones. Maybe
I'm wrong but for me it's logical that availability-zones do not
share the same compute nodes. The availability-zones have the role
of partition your compute nodes into zones that are physically
separated (in large term it would require separation of physical
servers, networking equipments, power sources, etc). So that when
user deploys 2 VMs in 2 different zones, he knows that these VMs do
not fall into a same host and if some zone falls, the others continue
working, thus the client will not lose all of his VMs.


See Vish's email.

Even under the original meaning of availability zones you could 
realistically have multiple orthogonal availability zones based on 
room, or rack, or network, or dev vs production, or even 
has_ssds and a compute node could reasonably be part of several 
different zones because they're logically in different namespaces.


Then an end-user could boot an instance, specifying networkA, dev, 
and has_ssds and only hosts that are part of all three zones would match.
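
To make the intersection semantics concrete, a minimal sketch in plain Python
(this is not the actual AvailabilityZoneFilter, just an illustration of "the
host must be in every requested zone"):

    def host_matches(host_zones, requested_zones):
        # A host passes only if it belongs to every zone the user asked for.
        return set(requested_zones).issubset(host_zones)

    print(host_matches({"networkA", "dev", "has_ssds", "rack1"},
                       {"networkA", "dev", "has_ssds"}))   # True
    print(host_matches({"networkA", "prod", "rack2"},
                       {"networkA", "dev"}))                # False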


Even if they're not used for orthogonal purposes, multiple availability 
zones might make sense.  Currently availability zones are the only way 
an end-user has to specify anything about the compute host he wants to 
run on.  So it's not entirely surprising that people might want to 
overload them for purposes other than physical partitioning of machines.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-26 Thread Eugene Nikanorov
Let's discuss it on weekly LBaaS meeting tomorrow.

Thanks,
Eugene.


On Wed, Mar 26, 2014 at 7:03 PM, Susanne Balle sleipnir...@gmail.com wrote:

 Jorge: I agree with you around ensuring different drivers support the
 API contract and the no vendor lock-in.

 All: How do we move this forward? It sounds like we have agreement that
 this is worth investigating.

 How do we move forward with the investigation and how to best architect
 this? Is this a topic for tomorrow's LBaaS weekly meeting? or should I
 schedule a hang-out meeting for us to discuss?

 Susanne




 On Tue, Mar 25, 2014 at 6:16 PM, Jorge Miramontes 
 jorge.miramon...@rackspace.com wrote:

   Hey Susanne,

   I think it makes sense to group drivers by each LB software. For
  example, there would be a driver for HAProxy, one for Citrix's NetScaler,
  one for Riverbed's Stingray, etc. One important aspect about OpenStack that
  I don't want us to forget, though, is that a tenant should be able to move
  between cloud providers at their own will (no vendor lock-in). The API
  contract is what allows this. The challenging aspect is ensuring different
  drivers support the API contract in the same way. What components
  drivers should share is also an interesting conversation to be had.

  Cheers,
 --Jorge

   From: Susanne Balle sleipnir...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Tuesday, March 25, 2014 6:59 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and
 managed services

   John, Brandon,

  I agree that we cannot have a multitude of drivers doing the same thing,
  or close to it, because then we end up in the same situation as we are today
  where we have duplicate effort and technical debt.

   The goal here would be to be able to build a framework around the
  drivers that would allow for resiliency, failover, etc...

   If the differentiators are in higher-level APIs then we can have a
  single driver (in the best case) for each software LB, e.g. HAProxy, nginx,
  etc.

  Thoughts?

  Susanne


 On Mon, Mar 24, 2014 at 11:26 PM, John Dewey j...@dewey.ws wrote:

 I have a similar concern.  The underlying driver may support different
 functionality, but the differentiators need exposed through the top level
 API.

  I see the SSL work is well underway, and I am in the process of
 defining L7 scripting requirements.  However, I will definitely need L7
 scripting prior to the API being defined.
  Is this where vendor extensions come into play?  I kinda like the route
  the Ironic guys are taking with a vendor passthru API.

  John

 On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:

   Creating a separate driver for every new need brings up a concern I
 have had.  If we are to implement a separate driver for every need then the
  permutations are endless and may cause a lot of drivers and technical debt.
  If someone wants an ha-haproxy driver then great.  What if they want it to
 be scalable and/or HA, is there supposed to be scalable-ha-haproxy,
  scalable-haproxy, and ha-haproxy drivers?  Then what if, instead of
  spinning up processes on the host machine, we want a nova VM or a container
 to house it?  As you can see the permutations will begin to grow
 exponentially.  I'm not sure there is an easy answer for this.  Maybe I'm
 worrying too much about it because hopefully most cloud operators will use
 the same driver that addresses those basic needs, but worst case scenarios
 we have a ton of drivers that do a lot of similar things but are just
 different enough to warrant a separate driver.
  --
 *From:* Susanne Balle [sleipnir...@gmail.com]
 *Sent:* Monday, March 24, 2014 4:59 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra
 and managed services

   Eugene,

  Thanks for your comments,

  See inline:

  Susanne


  On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov 
 enikano...@mirantis.com wrote:

  Hi Susanne,

  a couple of comments inline:




 We would like to discuss adding the concept of managed services to the
 Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
 proxy. The latter could be a second approach for some of the software
 load-balancers e.g. HA proxy since I am not sure that it makes sense to
 deploy Libra within Devstack on a single VM.



 Currently users would have to deal with HA, resiliency, monitoring and
 managing their load-balancers themselves.  As a service provider we are
 taking a more managed service approach allowing our customers to consider
 the LB as a black box and the service manages the resiliency, HA,
 monitoring, etc. for them.



   As far as I understand these two abstracts, you're talking about
 making LBaaS API more high-level than it is right now.
 I 

[openstack-dev] [all] sample config files should be ignored in git...

2014-03-26 Thread Clint Byrum
This is an issue that affects all of our git repos. If you are using
oslo.config, you will likely also be using the sample config generator.

However, for some reason we are all checking this generated file in.
This makes no sense, as we humans are not editing it, and it often
picks up config files from other things like libraries (keystoneclient
in particular). This has led to breakage in the gate a few times for
Heat, perhaps for others as well.

I move that we all rm this file from our git trees, and start generating
it as part of the install/dist process (I have no idea how to do
this..). This would require:

- rm sample files and add them to .gitignore in all trees
- Removing check_uptodate.sh from all trees/tox.ini's
- Generating file during dist/install process.

Does anyone disagree?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All]Optional dependencies and requirements.txt

2014-03-26 Thread Ben Nemec
I have submitted a couple of changes to start us down the path to better 
optional dependency support, as discussed below.  There are still some 
issues to be worked out, like how to specify a default for a particular 
project (I punted on this for the oslo.messaging POC and left kombu as a 
hard requirement), but I think this is progress.  Let me know if you 
have any comments.


pbr change to support the nested dependencies: 
https://review.openstack.org/#/c/83149/


oslo.messaging POC demonstrating how this could be done: 
https://review.openstack.org/#/c/83150/


Thanks.

-Ben

On 02/17/2014 05:06 PM, Ben Nemec wrote:

On 2014-02-12 18:22, David Koo wrote:

We could use a separate requirements file for each driver, following
a naming
convention to let installation tools pick up the right file.  For
example,
oslo.messaging might include amqp-requirements.txt,
qpid-requirements.txt,
zmq-requirements.txt, etc.


If we're going to have more than one requirement file then may I propose
something like a requirements.d directory and putting the files in that
directory (and no need for a -requirements suffix)?

requirements.d/
global
amqp
qpid
zmq
...

Somehow seems cleaner.


This makes sense to me as well, especially since I think we're going to
end up with somewhat of a proliferation of these files over time.  I'm
pretty sure I'm about to hit the same issue with different cinder
backends that have different requirements, so it's not just messaging
that's affected.

If nobody objects to this approach, I'll look into supporting the
requirements.d style going forward.

-Ben



--
Koo

On Wed, 12 Feb 2014 16:42:17 -0500
Doug Hellmann doug.hellm...@dreamhost.com wrote:


On Wed, Feb 12, 2014 at 3:58 PM, Ben Nemec openst...@nemebean.com
wrote:

 Hi all,

 This is an issue that has come up recently in tripleo as we try to
 support more varied configurations.  Currently qpid-python is not
 listed in requirements.txt for many of the OpenStack projects, even
 though they support using Qpid as a messaging broker.  This means
 that when we install from source in tripleo we have to dynamically
 add a line to requirements.txt if we want to use Qpid (we pip
 install -r to handle deps). There seems to be disagreement over the
 correct way to handle this, so Joe requested on my proposed Nova
 change that I raise the issue here.

 There's already some discussion on the bug here:
 https://bugs.launchpad.net/heat/+bug/1225191 as well as a separate
 Neutron bug here: https://bugs.launchpad.net/neutron/+bug/1225232

 If there's a better alternative to require all the things I'm
 certainly interested to hear it.  I expect we're going to hit this
 more in the future as we add support for other optional backends
 for services and such.


We could use a separate requirements file for each driver, following a
naming convention to let installation tools pick up the right file.
For example, oslo.messaging might include amqp-requirements.txt,
qpid-requirements.txt, zmq-requirements.txt, etc.

That would complicate the global requirements sync script, but not
terribly so.

Thoughts?

Doug




 Thanks.

 -Ben

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Jenkins test logs and their retention period

2014-03-26 Thread Vishvananda Ishaya

On Mar 24, 2014, at 2:09 PM, Joe Gordon joe.gord...@gmail.com wrote:

 
 
 
 On Mon, Mar 24, 2014 at 3:49 AM, Sean Dague s...@dague.net wrote:
 Here is some preliminary views (it currently ignores the ceilometer
 logs, I haven't had a chance to dive in there yet).
 
 It actually looks like a huge part of the issue is oslo.messaging, the
 bulk of screen-n-cond is oslo.messaging debug errors. It seems that in
 debug mode oslo.messaging is basically a 100% trace mode, which include
 logging every time a UUID is created and every payload.
 
 I'm not convinced why that's useful. We don't log every sql statement
 we run (with full payload).
 
 
 Agreed. I turned off oslo.messaging logs [1] and the file sizes in a 
 check-tempest-dsvm-full dropped drastically to [2]. nova-conductor logs 
 dropped way down from 7.3MB to 214K.
 
 [1] https://review.openstack.org/#/c/82255/
 [2] 
 http://logs.openstack.org/55/82255/1/check/check-tempest-dsvm-full/88d1e36/logs/?C=S;O=D
 
 The recent integration of oslo.messaging would also explain the new
 growth of logs.
 
 Other issues include other oslo utils that have really verbose debug
 modes. Like lockutils emitting 4 DEBUG messages for every lock acquired.
 
 Part of the challenge is turning off DEBUG is currently embedded in code
 in oslo log, which makes it kind of awkward to set sane log levels for
 included libraries because it requires an oslo round trip with code to
 all the projects to do it.
 
 ++
 
 One possible solution is to start using the  log_config_append and load the 
 config from a logging.conf file. But we don't even copy over the sample file 
 in devstack. So for icehouse we may want to do a cherry-pick from 
 oslo-incubator to disable oslo.messaging

Can’t we just specify a reasonable default_log_levels in *.conf in devstack? 
That would cut down the log chatter for integration tests, and wouldn’t be a 
breaking change.

Vish

  
 
 -Sean
 
 On 03/21/2014 07:23 PM, Clark Boylan wrote:
  Hello everyone,
 
  Back at the Portland summit the Infra team committed to archiving six months
  of test logs for Openstack. Since then we have managed to do just that.
  However, more recently we have seen the growth rate on those logs continue
  to grow beyond what is a currently sustainable level.
 
  For reasons, we currently store logs on a filesystem backed by cinder
  volumes. Rackspace limits the size and number of volumes attached to a
  single host meaning the upper bound on the log archive filesystem is ~12TB
  and we are almost there. You can see real numbers and pretty graphs on our
  cacti server [0].
 
  Long term we are trying to move to putting all of the logs in swift, but it
  turns out there are some use case issues we need to sort out around that
  before we can do so (but this is being worked on so should happen). Until
  that day arrives we need to work on logging more smartly, and if we can't do
  that we will have to reduce the log retention period.
 
  So what can you do? Well it appears that our log files may need a diet. I
  have listed the worst offenders below (after a small sampling, there may be
  more) and it would be great if we could go through those with a comb and
  figure out if we are logging actually useful data. The great thing about
  doing this is it will make lives better for deployers of Openstack too.
 
  Some initial checking indicates a lot of this noise may be related to
  ceilometer. It looks like it is logging AMQP stuff frequently and inflating
  the logs of individual services as it polls them.
 
  Offending files from tempest tests:
  screen-n-cond.txt.gz 7.3M
  screen-ceilometer-collector.txt.gz 6.0M
  screen-n-api.txt.gz 3.7M
  screen-n-cpu.txt.gz 3.6M
  tempest.txt.gz 2.7M
  screen-ceilometer-anotification.txt.gz 1.9M
  subunit_log.txt.gz 1.5M
  screen-g-api.txt.gz 1.4M
  screen-ceilometer-acentral.txt.gz 1.4M
  screen-n-net.txt.gz 1.4M
  from: 
  http://logs.openstack.org/52/81252/2/gate/gate-tempest-dsvm-full/488bc4e/logs/?C=S;O=D
 
  Unittest offenders:
  Nova subunit_log.txt.gz 14M
  Neutron subunit_log.txt.gz 7.8M
  Keystone subunit_log.txt.gz 4.8M
 
  Note all of the above files are compressed with gzip -9 and the filesizes
  above reflect compressed file sizes.
 
  Debug logs are important to you guys when dealing with Jenkins results. We
  want your feedback on how we can make this better for everyone.
 
  [0] 
  http://cacti.openstack.org/cacti/graph.php?action=viewlocal_graph_id=717rra_id=all
 
  Thank you,
  Clark Boylan
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 

Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-26 Thread Russell Bryant
On 03/26/2014 02:10 PM, Clint Byrum wrote:
 This is an issue that affects all of our git repos. If you are using
 oslo.config, you will likely also be using the sample config generator.
 
 However, for some reason we are all checking this generated file in.
 This makes no sense, as we humans are not editting it, and it often
 picks up config files from other things like libraries (keystoneclient
 in particular). This has lead to breakage in the gate a few times for
 Heat, perhaps for others as well.
 
 I move that we all rm this file from our git trees, and start generating
 it as part of the install/dist process (I have no idea how to do
 this..). This would require:
 
 - rm sample files and add them to .gitignore in all trees
 - Removing check_uptodate.sh from all trees/tox.ini's
 - Generating file during dist/install process.
 
 Does anyone disagree?

This has been done in Nova, except we don't have it generated during
install.  We just have instructions and a tox target that will do it if
you choose to.

https://git.openstack.org/cgit/openstack/nova/tree/etc/nova/README-nova.conf.txt

Related, adding instructions to generate without tox:
https://review.openstack.org/#/c/82533/

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-03-26 Thread Eric Harney
On 03/25/2014 11:07 AM, Shlomi Sasson wrote:

 I am not sure what will be the right approach to handle this, I already have 
 the code, should I open a bug or blueprint to track this issue?
 
 Best Regards,
 Shlomi
 


A blueprint around this would be appreciated.  I have had similar
thoughts around this myself, that these should be options for the LVM
iSCSI driver rather than different drivers.

These options also mirror how we can choose between tgt/iet/lio in the
LVM driver today.  I've been assuming that RDMA support will be added to
the LIO driver there at some point, and this seems like a nice way to
enable that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-26 Thread Keith Bray


On 3/25/14 11:55 AM, Ruslan Kamaldinov rkamaldi...@mirantis.com wrote:

* Murano DSL will focus on:
  a. UI rendering


One of the primary reasons I am opposed to using a different DSL/project
to accomplish this is that the person authoring the HOT template is
usually the system architect, and this is the same person who has the
technical knowledge to know what technologies you can swap in/out and
still have that system/component work, so they are also the person who
can/should define the rules of what component building blocks can and
can't work together.  There has been an overwhelmingly strong preference
from the system architects/DevOps/ApplicationExperts I [1] have talked to
for the ability to have control over those rules directly within the HOT
file or immediately alongside the HOT file but feed the whole set of
files to a single API endpoint.  I'm not advocating that this extra stuff
be part of Heat Engine (I understand the desire to keep the orchestration
engine clean)... But from a barrier to adoption point-of-view, the extra
effort for the HOT author to learn another DSL and use yet another system
(or even have to write multiple files) should not be underestimated.
These people are not OpenStack developers, they are DevOps folks and
Application Experts.  This is why the Htr[2] project was proposed and
threads were started to add extra data to the HOT template that the Heat engine
could essentially ignore, but would make defining UI rendering and
component connectivity easy for the HOT author.

I'm all for contributions to OpenStack, so I encourage the Murano team to
continue doing its thing if they find it adds value to themselves or
others. However, I'd like to see the Orchestration program support the
surrounding things the users of the Heat engine want/need from their cloud
system instead of having those needs met by separate projects seeking
incubation. There are technical ways to keep the core engine clean while
having the Orchestration Program API Service move up the stack in terms of
cloud user experience.

  b. HOT generation
  c. Setup other services (like put Mistral tasks to Mistral and bind
 them with events)

Speaking about new DSL for Murano. We're speaking about Application
Lifecycle
Management. There are a lot of existing tools - Heat/HOT, Python, etc,
but none
of them was designed with ALM in mind as a goal.

Solum[3] is specifically designed for ALM and purpose-built for
OpenStack... It has declared that it will generate HOT templates and set up
other services, including putting together or executing supplied workflow
definition (using Mistral if applicable).  Like Murano, Solum is also not
an OpenStack incubated project, but it has been designed with community
collaboration (based on shared pain across multiple contributors) with the
ALM goal in mind from the very beginning.

-Keith


[1] I regularly speak with DevOps, Application Specialists, and cloud
customers, specifically about Orchestration and Heat.. HOT is somewhat
simple enough for the most technical of them (DevOps & App Specialists) to
grasp and have interest in adopting, but there is strong pushback from
the folks I talk to about having to learn one more thing... Since Heat
adopters are exactly the same people who have the knowledge to define the
overall system capabilities including component connectivity and how UI
should be rendered, I'd like to keep it simple for them. The more we can
do to have the Orchestration service look/feel like one thing (even if
it's Engine + Other things under the hood), or reuse other OpenStack core
components (e.g. Glance) the better for adoption.
[2] https://wiki.openstack.org/wiki/Heat/htr
[3] http://solum.io



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] sample config files should be ignored in git...

2014-03-26 Thread Kurt Griffiths
Team, what do you think about doing this for Marconi? It looks like we
indeed have a sample checked in:

https://review.openstack.org/#/c/83006/1/etc/marconi.conf.sample


Personally, I think we should keep the sample until generate_sample.sh
works on OS X (we could even volunteer to fix it); otherwise, people with
MBPs will be in a bit of a bind.

---
Kurt G. | @kgriffs

On 3/26/14, 1:15 PM, Russell Bryant rbry...@redhat.com wrote:

On 03/26/2014 02:10 PM, Clint Byrum wrote:
 This is an issue that affects all of our git repos. If you are using
 oslo.config, you will likely also be using the sample config generator.
 
 However, for some reason we are all checking this generated file in.
 This makes no sense, as we humans are not editting it, and it often
 picks up config files from other things like libraries (keystoneclient
 in particular). This has lead to breakage in the gate a few times for
 Heat, perhaps for others as well.
 
 I move that we all rm this file from our git trees, and start generating
 it as part of the install/dist process (I have no idea how to do
 this..). This would require:
 
 - rm sample files and add them to .gitignore in all trees
 - Removing check_uptodate.sh from all trees/tox.ini's
 - Generating file during dist/install process.
 
 Does anyone disagree?

This has been done in Nova, except we don't have it generated during
install.  We just have instructions and a tox target that will do it if
you choose to.

https://git.openstack.org/cgit/openstack/nova/tree/etc/nova/README-nova.co
nf.txt

Related, adding instructions to generate without tox:
https://review.openstack.org/#/c/82533/

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-26 Thread Zane Bitter

On 26/03/14 14:10, Clint Byrum wrote:

This is an issue that affects all of our git repos. If you are using
oslo.config, you will likely also be using the sample config generator.

However, for some reason we are all checking this generated file in.
This makes no sense, as we humans are not editting it, and it often
picks up config files from other things like libraries (keystoneclient
in particular). This has lead to breakage in the gate a few times for
Heat, perhaps for others as well.


Just to put the other side of this... the latest change to oslo.config 
has produced a *completely broken* config file in Heat (due to the fix 
for bug #1262148 landing in oslo.config - see  bug #1288586 for gory 
details).


The fact that we have to make a change to a file in the repository that 
goes through code review means that we are able to see that. If it were 
silently generated with no human intervention, we would be shipping 
garbage right now.



That said, the fact that config files have to match to pass the gate, as 
they do currently, also makes it very hard to actually fix the bug. So 
I'm not sure what the right answer is here.


cheers,
Zane.


I move that we all rm this file from our git trees, and start generating
it as part of the install/dist process (I have no idea how to do
this..). This would require:

- rm sample files and add them to .gitignore in all trees
- Removing check_uptodate.sh from all trees/tox.ini's
- Generating file during dist/install process.

Does anyone disagree?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Jay Pipes
On Wed, 2014-03-26 at 09:47 -0700, Vishvananda Ishaya wrote:
 Personally I view this as a bug. There is no reason why we shouldn’t
 support arbitrary grouping of zones. I know there is at least one
 problem with zones that overlap regarding displaying them properly:
 
 https://bugs.launchpad.net/nova/+bug/1277230
 
 There is probably a related issue that is causing the error you see
 below. IMO both of these should be fixed. I also think adding a
 compute node to two different aggregates with azs should be allowed.
 
 It also might be nice to support specifying multiple zones in the
 launch command in these models. This would allow you to limit booting
 to an intersection of two overlapping zones.
 
 A few examples where these ideas would be useful:
 
 1. You have 3 racks of servers and half of the nodes from each rack
 plugged into a different switch. You want to be able to specify to
 spread across racks or switches via an AZ. In this model you could
 have a zone for each switch and a zone for each rack.
 
 2. A single cloud has 5 racks in one room in the datacenter and 5
 racks in a second room. You’d like to give control to the user to
 choose the room or choose the rack. In this model you would have one
 zone for each room, and smaller zones for each rack.
 
 3. You have a small 3 rack cloud and would like to ensure that your
 production workloads don’t run on the same machines as your dev
 workloads, but you also want to use zones spread workloads across the
 three racks. Similarly to 1., you could split your racks in half via
 dev and prod zones. Each one of these zones would overlap with a rack
 zone.
 
 You can achieve similar results in these situations by making small
 zones (switch1-rack1 switch1-rack2 switch1-rack3 switch2-rack1
 switch2-rack2 switch2-rack3) but that removes the ability to decide to
 launch something with less granularity. I.e. you can’t just specify
 'switch1' or 'rack1' or 'anywhere'
 
 I’d like to see all of the following work
 nova boot … (boot anywhere)
 nova boot --availability-zone switch1 … (boot in switch1 zone)
 nova boot --availability-zone rack1 … (boot in rack1 zone)
 nova boot --availability-zone switch1,rack1 … (boot

Personally, I feel it is a mistake to continue to use the Amazon concept
of an availability zone in OpenStack, as it brings with it the
connotation from AWS EC2 that each zone is an independent failure
domain. This characteristic of EC2 availability zones is not enforced in
OpenStack Nova or Cinder, and therefore creates a false expectation for
Nova users.

In addition to the above problem with incongruent expectations, the
other problem with Nova's use of the EC2 availability zone concept is
that availability zones are not hierarchical -- due to the fact that EC2
AZs are independent failure domains. Not having the possibility of
structuring AZs hierarchically limits the ways in which Nova may be
deployed -- just see the cells API for the manifestation of this
problem.

I would love it if the next version of the Nova and Cinder APIs would
drop the concept of an EC2 availability zone and introduce the concept
of a generic region structure that can be infinitely hierarchical in
nature. This would enable all of Vish's nova boot commands above in an
even simpler fashion. For example:

Assume a simple region hierarchy like so:

        regionA
        /     \
   regionB   regionC

# User wants to boot in region B
nova boot --region regionB
# User wants to boot in either region B or region C
nova boot --region regionA
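
For illustration, a minimal sketch of how such a hierarchical region request
could be resolved (none of this is Nova code, and --region is part of the
proposal, not an existing flag):

    # Hypothetical region tree matching the example above.
    REGION_CHILDREN = {
        "regionA": ["regionB", "regionC"],
        "regionB": [],
        "regionC": [],
    }

    def expand_region(region):
        # The requested region plus all of its descendants are acceptable.
        acceptable = {region}
        for child in REGION_CHILDREN.get(region, []):
            acceptable |= expand_region(child)
        return acceptable

    print(expand_region("regionB"))  # {'regionB'}
    print(expand_region("regionA"))  # {'regionA', 'regionB', 'regionC'}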

I think of the EC2 availability zone concept in the Nova and Cinder APIs
as just another example of implementation leaking out of the API. The
fact that EC2 availability zones are implemented as independent failure
domains and thus have a non-hierarchical structure has caused the Nova
API to look and feel a certain way that locks the API into the
implementation of a non-OpenStack product.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Sangeeta Singh
Yes, Vish's description describes the use cases and the need for multiple
overlapping availability zones nicely.

If multiple availability zones can be specified in the launch command, that
will allow the end user to select hosts that satisfy all their constraints.

Thanks,
Sangeeta

On 3/26/14, 11:00 AM, Chris Friesen chris.frie...@windriver.com wrote:

On 03/26/2014 11:17 AM, Khanh-Toan Tran wrote:

 I don't know why you need a
 compute node that belongs to 2 different availability-zones. Maybe
 I'm wrong but for me it's logical that availability-zones do not
 share the same compute nodes. The availability-zones have the role
 of partition your compute nodes into zones that are physically
 separated (in large term it would require separation of physical
 servers, networking equipments, power sources, etc). So that when
 user deploys 2 VMs in 2 different zones, he knows that these VMs do
 not fall into a same host and if some zone falls, the others continue
 working, thus the client will not lose all of his VMs.

See Vish's email.

Even under the original meaning of availability zones you could
realistically have multiple orthogonal availability zones based on
room, or rack, or network, or dev vs production, or even
has_ssds and a compute node could reasonably be part of several
different zones because they're logically in different namespaces.

Then an end-user could boot an instance, specifying networkA, dev,
and has_ssds and only hosts that are part of all three zones would
match.

Even if they're not used for orthogonal purposes, multiple availability
zones might make sense.  Currently availability zones are the only way
an end-user has to specify anything about the compute host he wants to
run on.  So it's not entirely surprising that people might want to
overload them for purposes other than physical partitioning of machines.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-26 Thread Tim Bell

My assumption on the deprecation messages is that they are targeted at non-core 
OpenStack applications.

OpenStack developer pressure should be established within the projects, not by 
overwhelming production clouds with logs saying that something is deprecated.

Equally, asking locally developed internal application developers to migrate 
when the official projects have not will be a tough meeting.

Tim

 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 26 March 2014 18:52
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API
 
 
 Logging on half of the API calls is obviously bad, but logging a single time 
 seems reasonable.  Deployers need to know it's deprecated,
 too.  If an API is going away, they have to make plays to test/deploy the new 
 thing.
 
 --
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Jenkins test logs and their retention period

2014-03-26 Thread Joe Gordon
On Wed, Mar 26, 2014 at 11:15 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:


 On Mar 24, 2014, at 2:09 PM, Joe Gordon joe.gord...@gmail.com wrote:




 On Mon, Mar 24, 2014 at 3:49 AM, Sean Dague s...@dague.net wrote:

 Here is some preliminary views (it currently ignores the ceilometer
 logs, I haven't had a chance to dive in there yet).

 It actually looks like a huge part of the issue is oslo.messaging, the
 bulk of screen-n-cond is oslo.messaging debug errors. It seems that in
 debug mode oslo.messaging is basically a 100% trace mode, which include
 logging every time a UUID is created and every payload.

 I'm not convinced why that's useful. We don't log every sql statement
 we run (with full payload).


 Agreed. I turned off oslo.messaging logs [1] and the file sizes in a
 check-tempest-dsvm-full dropped drastically to [2]. nova-conductor logs
 dropped way down from 7.3MB to 214K.

 [1] https://review.openstack.org/#/c/82255/
 [2]
 http://logs.openstack.org/55/82255/1/check/check-tempest-dsvm-full/88d1e36/logs/?C=S;O=D

 The recent integration of oslo.messaging would also explain the new
 growth of logs.

 Other issues include other oslo utils that have really verbose debug
 modes. Like lockutils emitting 4 DEBUG messages for every lock acquired.

 Part of the challenge is turning off DEBUG is currently embedded in code
 in oslo log, which makes it kind of awkward to set sane log levels for
 included libraries because it requires an oslo round trip with code to
 all the projects to do it.


 ++

 One possible solution is to start using the  log_config_append and load
 the config from a logging.conf file. But we don't even copy over the sample
 file in devstack. So for icehouse we may want to do a cherry-pick from
 oslo-incubator to disable oslo.messaging


 Can't we just specify a reasonable default_log_levels in *.conf in
 devstack? That would cut down the log chatter for integration tests, and
 wouldn't be a breaking change.


If we are having problems in the gate with verbose and useless logs, others
will too... so I don't think we should sidestep the problem via devstack;
otherwise every deployer will have to do the same. This fits in with the
*sane defaults* mantra.



 Vish




 -Sean

 On 03/21/2014 07:23 PM, Clark Boylan wrote:
  Hello everyone,
 
  Back at the Portland summit the Infra team committed to archiving six
 months
  of test logs for Openstack. Since then we have managed to do just that.
  However, more recently we have seen the growth rate on those logs
 continue
  to grow beyond what is a currently sustainable level.
 
  For reasons, we currently store logs on a filesystem backed by cinder
  volumes. Rackspace limits the size and number of volumes attached to a
  single host meaning the upper bound on the log archive filesystem is
 ~12TB
  and we are almost there. You can see real numbers and pretty graphs on
 our
  cacti server [0].
 
  Long term we are trying to move to putting all of the logs in swift,
 but it
  turns out there are some use case issues we need to sort out around that
  before we can do so (but this is being worked on so should happen).
 Until
  that day arrives we need to work on logging more smartly, and if we
 can't do
  that we will have to reduce the log retention period.
 
  So what can you do? Well it appears that our log files may need a diet.
 I
  have listed the worst offenders below (after a small sampling, there
 may be
  more) and it would be great if we could go through those with a comb and
  figure out if we are logging actually useful data. The great thing about
  doing this is it will make lives better for deployers of Openstack too.
 
  Some initial checking indicates a lot of this noise may be related to
  ceilometer. It looks like it is logging AMQP stuff frequently and
 inflating
  the logs of individual services as it polls them.
 
  Offending files from tempest tests:
  screen-n-cond.txt.gz 7.3M
  screen-ceilometer-collector.txt.gz 6.0M
  screen-n-api.txt.gz 3.7M
  screen-n-cpu.txt.gz 3.6M
  tempest.txt.gz 2.7M
  screen-ceilometer-anotification.txt.gz 1.9M
  subunit_log.txt.gz 1.5M
  screen-g-api.txt.gz 1.4M
  screen-ceilometer-acentral.txt.gz 1.4M
  screen-n-net.txt.gz 1.4M
  from:
 http://logs.openstack.org/52/81252/2/gate/gate-tempest-dsvm-full/488bc4e/logs/?C=S;O=D
 
  Unittest offenders:
  Nova subunit_log.txt.gz 14M
  Neutron subunit_log.txt.gz 7.8M
  Keystone subunit_log.txt.gz 4.8M
 
  Note all of the above files are compressed with gzip -9 and the
 filesizes
  above reflect compressed file sizes.
 
  Debug logs are important to you guys when dealing with Jenkins results.
 We
  want your feedback on how we can make this better for everyone.
 
  [0]
 http://cacti.openstack.org/cacti/graph.php?action=viewlocal_graph_id=717rra_id=all
 
  Thank you,
  Clark Boylan
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  

Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-26 Thread Anne Gentle
On Wed, Mar 26, 2014 at 1:10 PM, Clint Byrum cl...@fewbar.com wrote:

 This is an issue that affects all of our git repos. If you are using
 oslo.config, you will likely also be using the sample config generator.

 However, for some reason we are all checking this generated file in.
 This makes no sense, as we humans are not editting it, and it often
 picks up config files from other things like libraries (keystoneclient
 in particular). This has lead to breakage in the gate a few times for
 Heat, perhaps for others as well.

 I move that we all rm this file from our git trees, and start generating
 it as part of the install/dist process (I have no idea how to do
 this..). This would require:

 - rm sample files and add them to .gitignore in all trees
 - Removing check_uptodate.sh from all trees/tox.ini's
 - Generating file during dist/install process.

 Does anyone disagree?


The documentation currently points to the latest copy of the generated
files when they're available. I'd like to continue having those generated
and looked at by humans in reviews.

I think if you asked non-devs, such as deployers, you'd find wider uses.
Can you poll another group in addition to this mailing list?

Thanks,
Anne



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Jenkins test logs and their retention period

2014-03-26 Thread Joe Gordon
On Wed, Mar 26, 2014 at 9:51 AM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Tue, Mar 25, 2014 at 5:34 PM, Brant Knudson b...@acm.org wrote:




 On Mon, Mar 24, 2014 at 5:49 AM, Sean Dague s...@dague.net wrote:

 ...

 Part of the challenge is that turning off DEBUG is currently embedded in code
 in oslo log, which makes it kind of awkward to set sane log levels for
 included libraries, because it requires an oslo round trip with code to
 all the projects to do it.


 Here's how it's done in Keystone:
 https://review.openstack.org/#/c/62068/10/keystone/config.py

 It's definitely awkward.


 https://bugs.launchpad.net/oslo/+bug/1297950


Currently when you enable debug logs in OpenStack, the root logger is set
to debug and then we have to go and blacklist specific modules that we
don't want to run on debug. What about instead adding an option to just set
the OpenStack component at hand to debug log level and not the root logger?
That way we won't have to keep maintaining a blacklist of modules that
generate too many debug logs.
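
To make that concrete, here is a minimal sketch using only the stdlib logging
module (the wiring into oslo.log is left out, and the component name below is
just an example):

    # Minimal sketch: enable DEBUG for the component's own logger while the
    # root logger (and thus third-party libraries such as AMQP clients)
    # stays at WARNING.
    import logging

    def configure_logging(component='nova', debug=True):
        logging.basicConfig(level=logging.WARNING)   # root stays quiet
        if debug:
            logging.getLogger(component).setLevel(logging.DEBUG)

    configure_logging()
    logging.getLogger('nova.compute').debug('emitted')    # inherits from 'nova'
    logging.getLogger('amqp').debug('suppressed')         # library logger stays quiet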




 Doug





 - Brant


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-26 Thread Maglana, Mark
This is a really interesting discussion, but I was thrown off by the different
uses of 'functional testing.' I decided to reconcile it with my understanding
and ended up with this two-pager. Sharing it in case it helps:

https://docs.google.com/document/d/1ILxfoJlov9lBfuuZtvwW_7bmlGaYYw0UsE1R9lo6XwQ/edit


Regards,

Mark Maglana
QA/Release Engineer
Nexus IS, Inc.
Single Number Reach: 424-225-1309






On Mar 25, 2014, at 4:55 AM, Malini Kamalambal 
malini.kamalam...@rackspace.com wrote:

 
 
 
 We are talking about different levels of testing,
 
 1. Unit tests - which everybody agrees should be in the individual project
 itself
 2. System Tests - 'System' referring to (and limited to) all the components
 that make up the project. These are also the functional tests for the
 project.
 3. Integration Tests - This is to verify that the OS components interact
 well and don't break other components -Keystone being the most obvious
 example. This is where I see getting the maximum mileage out of Tempest.
 
 It's not easy to detect what the integration points with other projects are;
 any project can use any stable API from any other project. Because of this,
 all OpenStack APIs should fit into this category.
 
 Any project can use any stable API – but that does not make all API tests
 integration tests.
 A test becomes an integration test when it has two or more projects
 interacting in the test.
 
 Individual projects should be held accountable to make sure that their APIs
 work – no matter who consumes them.
 We should be able to treat the project as a complete system, make API calls 
 and validate that the response matches the API definitions.
 Identifying issues earlier in the pipeline reduces the Total Cost of Quality.
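
As a concrete sketch of the kind of project-level functional test described
above (the endpoint, port, resource paths and use of the requests library are
all assumptions for illustration):

    import unittest
    import requests

    BASE = 'http://127.0.0.1:8888/v1'   # assumed local endpoint, no Keystone

    class QueueApiContractTest(unittest.TestCase):
        """Treat the service as a complete system and check the API contract."""

        def test_create_then_list_queue(self):
            # Create a queue and verify the documented success codes.
            resp = requests.put(BASE + '/queues/demo')
            self.assertIn(resp.status_code, (201, 204))

            # The new queue should show up in the listing.
            resp = requests.get(BASE + '/queues')
            self.assertEqual(200, resp.status_code)
            names = [q['name'] for q in resp.json().get('queues', [])]
            self.assertIn('demo', names)

    if __name__ == '__main__':
        unittest.main()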
 
 I agree that Integration Testing is hard. It is complicated because it 
 requires knowledge of how systems interact with each other – and knowledge 
 comes from a lot of time spent on analysis.
 It requires people with project expertise to talk to each other and identify
 possible test cases.
 openstack-qa is the ideal forum to do this.
 Holding projects responsible for their functionality will help the QA team 
 focus on complicated integration tests.
 
 Having a second group writing tests to Nova's public APIs has been really 
 helpful in keeping us honest as well.
 
 Sounds like a testimonial for more project level testing :)
 
 
 I see value in projects taking ownership of the System Tests - because if
 the project is not 'functionally ready', it is not ready to integrate with
 other components of OpenStack.
 
 What do you mean by not ready?
 
 'Functionally Ready' - The units that make up a project can work together as
 a system, and all APIs have been exercised with positive and negative test
 cases by treating the project as a complete system.
 There are no known critical bugs. The point here being to identify as many
 issues as possible, earlier in the game.
  
 But for this approach to be successful, projects should have diversity in
 the team composition - we need more testers who focus on creating these
 tests.
 This will keep the teams honest in their quality standards.
 
 As long as individual projects cannot guarantee functional test coverage,
 we will need more tests in Tempest.
 But that will shift focus away from Integration Testing, which can be done
 ONLY in Tempest.
 
 Regardless of whatever we end up deciding, it will be good to have these
 discussions sooner than later.
 This will help at least the new projects to move in the right direction.
 
 -Malini
 
 
 
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-26 Thread Devananda van der Veen
I haven't gotten to my email back log yet, but want to point out that I
agree with everything Robert just said. I also raised these concerns on the
original ceilometer BP, which is what gave rise to all the work in ironic
that Haomeng has been doing (on the linked ironic BP) to expose these
metrics for ceilometer to consume.

Typing quickly on a mobile,
Deva
On Mar 26, 2014 11:34 AM, Robert Collins robe...@robertcollins.net
wrote:

 On 27 March 2014 06:28, Eoghan Glynn egl...@redhat.com wrote:
 
 
  On 3/25/2014 1:50 PM, Matt Wagner wrote:
   This would argue to me that the easiest thing for Ceilometer might be
   to query us for IPMI stats, if the credential store is pluggable.
   Fetch these bare metal statistics doesn't seem too off-course for
   Ironic to me. The alternative is that Ceilometer and Ironic would both
   have to be configured for the same pluggable credential store.
 
  There is already a blueprint with a proposed patch here for Ironic to do
  the querying:
  https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
 
  Yes, so I guess there are two fundamentally different approaches that
  could be taken here:
 
  1. ironic controls the cadence of IPMI polling, emitting notifications
 at whatever frequency it decides, carrying whatever level of
 detail/formatting it deems appropriate, which are then consumed by
 ceilometer which massages these provided data into usable samples
 
  2. ceilometer acquires the IPMI credentials either via ironic or
 directly from keystone/barbican, before calling out over IPMI at
 whatever cadence it wants and transforming these raw data into
 usable samples
 
  IIUC approach #1 is envisaged by the ironic BP[1].
 
  The advantage of approach #2 OTOH is that ceilometer is in the driving
  seat as far as cadence is concerned, and the model is far more
  consistent with how we currently acquire data from the hypervisor layer
  and SNMP daemons.

 The downsides of #2 are:
  - more machines require access to IPMI on the servers (if a given
 ceilometer is part of the deployed cloud, not part of the minimal
 deployment infrastructure). This sets off security red flags in some
 organisations.
  - multiple machines (ceilometer *and* Ironic) talking to the same
 IPMI device. IPMI has a limit on sessions, and in fact the controllers
 are notoriously buggy - having multiple machines talking to one IPMI
 device is a great way to exceed session limits and cause lockups.

 These seem fundamental showstoppers to me.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-26 Thread James E. Blair
Salvatore Orlando sorla...@nicira.com writes:

 On another note, we noticed that the duplicated jobs currently executed for
 redundancy in neutron actually seem to point all to the same build id.
 I'm not sure then if we're actually executing each job twice or just
 duplicating lines in the jenkins report.

Thanks for catching that, and I'm sorry that didn't work right.  Zuul is
in fact running the jobs twice, but it is only looking at one of them
when sending reports and (more importantly) deciding whether the change
has succeeded or failed.  Fixing this is possible, of course, but turns
out to be a rather complicated change.  Since we don't make heavy use of
this feature, I lean toward simply instantiating multiple instances of
identically configured jobs and invoking them (e.g. neutron-pg-1,
neutron-pg-2).

Matthew Treinish has already worked up a patch to do that, and I've
written a patch to revert the incomplete feature from Zuul.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

2014-03-26 Thread Sean Dague
Where is the RC bug tracking this?

If it's that bad, we really need to be explicit about this with a
critical bug at this stage of the release.

-Sean

On 03/26/2014 01:14 PM, Solly Ross wrote:
 Code which breaks:
 
 Glance's multiprocessing tests will break (the reason we should limit it now).
 For the future, people attempting to use psutil will have no clear version 
 target
 (Either they use 1.x and break with the people who install the latest version 
 from pip,
 or they use 2.0.0 and break with everything that doesn't use the latest
 version).
 
 psutil's API is extremely unstable -- it has undergone major revisions going 
 from 0.x to 1.x, and now
 1.x to 2.0.0.  Limiting psutil explicitly to a single major version (it was 
 more or less implicitly limited
 before, since there was no major version above 1) ensures that the
 requirements.txt file actually
 indicates what is necessary to use OpenStack.
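
To illustrate the kind of break in question, a hedged compatibility sketch;
the renames shown (get_pid_list() becoming pids(), Process attributes becoming
methods) are recalled from the psutil changelog and should be treated as
illustrative:

    # Sketch of guarding code against the psutil 1.x / 2.0.0 API split.
    import psutil

    def list_pids():
        if hasattr(psutil, 'pids'):          # psutil >= 2.0.0
            return psutil.pids()
        return psutil.get_pid_list()         # psutil 1.x

    def process_name(pid):
        proc = psutil.Process(pid)
        name = proc.name
        # In 2.0.0 many Process attributes became methods.
        return name() if callable(name) else name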
 
 The alternative option would be to update the glance tests, but my concern is 
 that 2.0.0 is not available
 from the package managers of most distros yet.
 
 Best Regards,
 Solly Ross
 
 - Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 26, 2014 10:39:41 AM
 Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0
 
 On 03/26/2014 10:30 AM, Solly Ross wrote:
 Hi,
 I currently have a patch up for review 
 (https://review.openstack.org/#/c/81373/) to limit psutil to be < 2.0.0.
 2.0.0 just came out a couple weeks ago, and breaks the API in a major way.  
 Until we can port our code to the
 latest version, I suggest we limit the version of psutil to 1.x (currently 
 there's a lower bound in the 1.x
 range, just not an upper bound).
 
 Which code will be broken by this if it's not done? Is there an RC bug
 tracking it?
 
   -Sean
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

2014-03-26 Thread Solly Ross
What bug tracker should I file under?  I tried filing one under the openstack 
common infrastructure tracker,
but was told that it wasn't the correct place to file such a bug.

Best Regards,
Solly Ross

- Original Message -
From: Sean Dague s...@dague.net
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, March 26, 2014 3:28:32 PM
Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

Where is the RC bug tracking this?

If it's that bad, we really need to be explicit about this with a
critical bug at this stage of the release.

-Sean

On 03/26/2014 01:14 PM, Solly Ross wrote:
 Code which breaks:
 
 Glance's multiprocessing tests will break (the reason we should limit it now).
 For the future, people attempting to use psutil will have no clear version 
 target
 (Either they use 1.x and break with the people who install the latest version 
 from pip,
 or they use 2.0.0 and break with everything that doesn't use the latest
 version).
 
 psutil's API is extremely unstable -- it has undergone major revisions going 
 from 0.x to 1.x, and now
 1.x to 2.0.0.  Limiting psutil explicitly to a single major version (it was 
 more or less implicitly limited
 before, since there was no major version above 1) ensures that the
 requirements.txt file actually
 indicates what is necessary to use OpenStack.
 
 The alternative option would be to update the glance tests, but my concern is 
 that 2.0.0 is not available
 from the package managers of most distros yet.
 
 Best Regards,
 Solly Ross
 
 - Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 26, 2014 10:39:41 AM
 Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0
 
 On 03/26/2014 10:30 AM, Solly Ross wrote:
 Hi,
 I currently have a patch up for review 
 (https://review.openstack.org/#/c/81373/) to limit psutil to be < 2.0.0.
 2.0.0 just came out a couple weeks ago, and breaks the API in a major way.  
 Until we can port our code to the
 latest version, I suggest we limit the version of psutil to 1.x (currently 
 there's a lower bound in the 1.x
 range, just not an upper bound).
 
 Which code will be broken by this if it's not done? Is there an RC bug
 tracking it?
 
   -Sean
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly LBaaS meeting

2014-03-26 Thread Eugene Nikanorov
Hi folks,

Let's keep our regular meetings. The next one is on Thursday the 27th, at 14:00 UTC.

The agenda for the meeting:
1) Object model discussion update
2) Requirements & glossary Q&A
3) Open discussion


Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed services

2014-03-26 Thread Brandon Logan
Eugene,
I assume the object model discussion will still continue.  Many were of the
opinion that the model with the load balancer is a good one, but you stated that
others who were not present at those meetings did not share that opinion,
such as Mark McClain.  Mark hasn't been in those meetings to say exactly why he
is opposed.  Is there any way we can get him and others who object to that
proposal into the meeting, or at least get a summary of those reasons?

Thanks,
Brandon

From: Eugene Nikanorov [enikano...@mirantis.com]
Sent: Wednesday, March 26, 2014 12:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed 
services

Let's discuss it on weekly LBaaS meeting tomorrow.

Thanks,
Eugene.


On Wed, Mar 26, 2014 at 7:03 PM, Susanne Balle 
sleipnir...@gmail.com wrote:
Jorge: I agree with you on ensuring that different drivers support the API
contract and that there is no vendor lock-in.

All: How do we move this forward? It sounds like we have agreement that this is 
worth investigating.

How do we move forward with the investigation, and how best to architect this?
Is this a topic for tomorrow's LBaaS weekly meeting, or should I schedule a
hangout meeting for us to discuss it?

Susanne




On Tue, Mar 25, 2014 at 6:16 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:
Hey Susanne,

I think it makes sense to group drivers by each LB software. For example, there
would be a driver for HAProxy, one for Citrix's NetScaler, one for Riverbed's
Stingray, etc. One important aspect about OpenStack that I don't want us to
forget, though, is that a tenant should be able to move between cloud providers
at will (no vendor lock-in). The API contract is what allows this.
The challenging aspect is ensuring different drivers support the API contract
in the same way. What components drivers should share is also an interesting
conversation to be had.

Cheers,
--Jorge

From: Susanne Balle sleipnir...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, March 25, 2014 6:59 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed 
services

John, Brandon,

I agree that we cannot have a multitude of drivers doing the same thing or
close to it, because then we end up in the same situation as we are in today,
where we have duplicate effort and technical debt.

The goal here would be to be able to build a framework around the drivers that
would allow for resiliency, failover, etc.

If the differentiators are in higher-level APIs then we can have a single
driver (in the best case) for each software LB, e.g. HAProxy, nginx, etc.

Thoughts?

Susanne


On Mon, Mar 24, 2014 at 11:26 PM, John Dewey 
j...@dewey.ws wrote:
I have a similar concern.  The underlying driver may support different
functionality, but the differentiators need to be exposed through the top-level API.

I see the SSL work is well underway, and I am in the process of defining L7 
scripting requirements.  However, I will definitely need L7 scripting prior to 
the API being defined.
Is this where vendor extensions come into play?  I kinda like the route the
Ironic guys are taking with a "vendor passthru" API.

John

On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:

Creating a separate driver for every new need brings up a concern I have had.  
If we are to implement a separate driver for every need then the permutations 
are endless and may cause a lot of drivers and technical debt.  If someone wants
an ha-haproxy driver then great.  What if they want it to be scalable and/or 
HA, is there supposed to be scalable-ha-haproxy, scalable-haproxy, and 
ha-haproxy drivers?  Then what if instead of spinning up processes on the
host machine we want a nova VM or a container to house it?  As you can see the 
permutations will begin to grow exponentially.  I'm not sure there is an easy 
answer for this.  Maybe I'm worrying too much about it because hopefully most 
cloud operators will use the same driver that addresses those basic needs, but 
worst case scenario we have a ton of drivers that do a lot of similar things
but are just different enough to warrant a separate driver.
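
One way to avoid that permutation explosion is a single driver parameterized by
deployment traits rather than a class per combination; the class and option
names below are invented purely for illustration:

    class HAProxyDriver(object):
        """Single haproxy driver; topology/placement are config, not subclasses."""

        def __init__(self, topology='single', placement='process'):
            # topology: 'single', 'ha' or 'scalable'
            # placement: 'process', 'vm' or 'container'
            self.topology = topology
            self.placement = placement

        def create_loadbalancer(self, lb):
            replicas = 2 if self.topology in ('ha', 'scalable') else 1
            return {'id': lb['id'],
                    'replicas': replicas,
                    'placement': self.placement}

    driver = HAProxyDriver(topology='ha', placement='vm')
    print(driver.create_loadbalancer({'id': 'lb-1'}))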

From: Susanne Balle [sleipnir...@gmail.com]
Sent: Monday, March 24, 2014 4:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and managed 
services

Eugene,

Thanks for your comments,

See inline:

Susanne


On Mon, 

Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

2014-03-26 Thread Sean Dague
Whichever project is going to explode under this combination.

-Sean

On 03/26/2014 03:32 PM, Solly Ross wrote:
 What bug tracker should I file under?  I tried filing one under the openstack 
 common infrastructure tracker,
 but was told that it wasn't the correct place to file such a bug.
 
 Best Regards,
 Solly Ross
 
 - Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 26, 2014 3:28:32 PM
 Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0
 
 Where is the RC bug tracking this?
 
 If it's that bad, we really need to be explicit about this with a
 critical bug at this stage of the release.
 
   -Sean
 
 On 03/26/2014 01:14 PM, Solly Ross wrote:
 Code which breaks:

 Glance's multiprocessing tests will break (the reason we should limit it
 now).
 For the future, people attempting to use psutil will have no clear version 
 target
 (Either they use 1.x and break with the people who install the latest 
 version from pip,
 or they use 2.0.0 and break with everything that doesn't use the latest
 version).

 psutil's API is extremely unstable -- it has undergone major revisions going 
 from 0.x to 1.x, and now
 1.x to 2.0.0.  Limiting psutil explicitly to a single major version (it was 
 more or less implicitly limited
 before, since there was no major version above 1) ensures that the
 requirements.txt file actually
 indicates what is necessary to use OpenStack.

 The alternative option would be to update the glance tests, but my concern 
 is that 2.0.0 is not available
 from the package managers of most distros yet.

 Best Regards,
 Solly Ross

 - Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 26, 2014 10:39:41 AM
 Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

 On 03/26/2014 10:30 AM, Solly Ross wrote:
 Hi,
 I currently have a patch up for review 
 (https://review.openstack.org/#/c/81373/) to limit psutil to be < 2.0.0.
 2.0.0 just came out a couple weeks ago, and breaks the API in a major way.  
 Until we can port our code to the
 latest version, I suggest we limit the version of psutil to 1.x (currently 
 there's a lower bound in the 1.x
 range, just not an upper bound).

 Which code will be broken by this if it's not done? Is there an RC bug
 tracking it?

  -Sean

 
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-26 Thread Joe Gordon
On Tue, Mar 25, 2014 at 1:49 AM, Maru Newby ma...@redhat.com wrote:


 On Mar 21, 2014, at 9:01 AM, David Kranz dkr...@redhat.com wrote:

  On 03/20/2014 04:19 PM, Rochelle.RochelleGrober wrote:
 
  -Original Message-
  From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
  Sent: Thursday, March 20, 2014 12:13 PM
 
  'project specific functional testing' in the Marconi context is treating
  Marconi as a complete system, making Marconi API calls and verifying the
  response - just like an end user would, but without keystone. If one of
  these tests fails, it is because there is a bug in the Marconi code, and
  not because its interaction with Keystone caused it to fail.
 
  That being said there are certain cases where having a project specific
  functional test makes sense. For example swift has a functional test job
  that starts swift in devstack. But, those things are normally handled on a
  per case basis. In general if the project is meant to be part of the larger
  OpenStack ecosystem then Tempest is the place to put functional testing.
  That way you know it works with all of the other components. The thing is,
  in OpenStack, what seems like a project isolated functional test almost
  always involves another project in real use cases (for example keystone
  auth with API requests).
 
  



 
  One of the concerns we heard in the review was 'having the functional
  tests elsewhere (i.e. within the project itself) does not count and they
  have to be in Tempest'.
  This has made us as a team wonder if we should migrate all our functional
  tests to Tempest.
  But from Matt's response, I think it is reasonable to continue on our
  current path and have the functional tests in Marconi coexist along with
  the tests in Tempest.
 
  I think that what is being asked, really is that the functional tests
 could be a single set of tests that would become a part of the tempest
 repository and that these tests would have an ENV variable as part of the
 configuration that would allow either no Keystone or Keystone or some
 such, if that is the only configuration issue that separates running the
 tests isolated vs. integrated.  The functional tests need to be as much as
 possible a single set of tests to reduce duplication and remove the
 likelihood of two sets getting out of sync with each other/development.  If
 they only run in the integrated environment, that's ok, but if you want to
 run them isolated to make debugging easier, then it should be a
 configuration option and a separate test job.
 
  So, if my assumptions are correct, QA only requires functional tests
 for integrated runs, but if the project QAs/Devs want to run isolated for
 dev and devtest purposes, more power to them.  Just keep it a single set of
 functional tests and put them in the Tempest repository so that if a
 failure happens, anyone can find the test and do the debug work without
 digging into a separate project repository.
 
  Hopefully, the tests as designed could easily take a new configuration
 directive and a short bit of work with OS QA will get the integrated FTs
 working as well as the isolated ones.
 
  --Rocky
  This issue has been much debated. There are some active members of our
 community who believe that all the functional tests should live outside of
 tempest in the projects, albeit with the same idea that such tests could be
 run either as part of today's real tempest runs or mocked in various ways
 to allow component isolation or better performance. Maru Newby posted a
 patch with an example of one way to do this but I think it expired and I
 don't have a pointer.

 I think the best place for functional api tests to be maintained is in the
 projects themselves.  The domain expertise required to write api tests is
 likely to be greater among project resources, and they should be tasked
 with writing api tests pre-merge.  The current 'merge-first, test-later'
 procedure of maintaining api tests in the Tempest repo makes that
 impossible.  Worse, the cost of developing functional api tests is higher
 in the integration environment that is the Tempest default.



If an API is made and documented properly what domain expertise would be
needed to use it? The opposite is true for tempest and the tests
themselves. The tempest team focuses on just tests so they know how to
write good tests and are able to leverage common underlying framework code.

Yes, 'merge-first, test-later' is bad. But there are other ways of fixing
this than moving the tests out of tempest. Hopefully cross project
dependencies in zuul will help make this workflow easier.




 The patch in question [1] proposes allowing pre-merge functional api test
 maintenance and test reuse in an integration environment.


 m.

 1: https://review.openstack.org/#/c/72585/

  IMO there are valid arguments on both sides, but I hope every one could
 agree that functional tests should not be arbitrarily split between
 projects and 

[openstack-dev] [Neutron] Flavor framework PoC code

2014-03-26 Thread Eugene Nikanorov
Hi folks,

I've made a small patch set to illustrate the idea and usage of flavors as
I see it.
https://review.openstack.org/#/c/83055/

I think gerrit can be a good place to discuss important implementation
details on a given example service plugin; take a look at the test_flavors.py
file where it is declared.

Feel free to provide any feedback,

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

2014-03-26 Thread Clark Boylan
On Wed, Mar 26, 2014 at 12:32 PM, Solly Ross sr...@redhat.com wrote:
 What bug tracker should I file under?  I tried filing one under the openstack 
 common infrastructure tracker,
 but was told that it wasn't the correct place to file such a bug.

 Best Regards,
 Solly Ross

 - Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 26, 2014 3:28:32 PM
 Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

 Where is the RC bug tracking this?

 If it's that bad, we really need to be explicit about this with a
 critical bug at this stage of the release.

 -Sean

 On 03/26/2014 01:14 PM, Solly Ross wrote:
 Code which breaks:

 Glance's multiprocessing tests will break (the reason we should limit it
 now).
 For the future, people attempting to use psutil will have no clear version 
 target
 (Either they use 1.x and break with the people who install the latest 
 version from pip,
 of they use 2.0.0 and break with everything that doesn't use the latest 
 version).

 psutil's API is extremely unstable -- it has undergone major revisions going 
 from 0.x to 1.x, and now
 1.x to 2.0.0.  Limiting psutil explicitly to a single major version (it was 
 more or less implicitly limited
 before, since there was no major version above 1) ensures that the
 requirements.txt file actually
 indicates what is necessary to use OpenStack.

 The alternative option would be to update the glance tests, but my concern 
 is that 2.0.0 is not available
 from the package managers of most distros yet.

 Best Regards,
 Solly Ross

 - Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 26, 2014 10:39:41 AM
 Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

 On 03/26/2014 10:30 AM, Solly Ross wrote:
 Hi,
 I currently have a patch up for review 
 (https://review.openstack.org/#/c/81373/) to limit psutil to be < 2.0.0.
 2.0.0 just came out a couple weeks ago, and breaks the API in a major way.  
 Until we can port our code to the
 latest version, I suggest we limit the version of psutil to 1.x (currently 
 there's a lower bound in the 1.x
 range, just not an upper bound).

 Which code will be broken by this if it's not done? Is there an RC bug
 tracking it?

   -Sean



 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

You should file the bug against the projects that don't work with
latest psutil. The bug is in particular projects being incompatible
with a dependency and belongs to those projects.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-26 Thread Clint Byrum
Excerpts from Anne Gentle's message of 2014-03-26 11:49:29 -0700:
 On Wed, Mar 26, 2014 at 1:10 PM, Clint Byrum cl...@fewbar.com wrote:
 
  This is an issue that affects all of our git repos. If you are using
  oslo.config, you will likely also be using the sample config generator.
 
  However, for some reason we are all checking this generated file in.
  This makes no sense, as we humans are not editing it, and it often
  picks up config files from other things like libraries (keystoneclient
  in particular). This has led to breakage in the gate a few times for
  Heat, perhaps for others as well.
 
  I move that we all rm this file from our git trees, and start generating
  it as part of the install/dist process (I have no idea how to do
  this..). This would require:
 
  - rm sample files and add them to .gitignore in all trees
  - Removing check_uptodate.sh from all trees/tox.ini's
  - Generating file during dist/install process.
 
  Does anyone disagree?
 
 
 The documentation currently points to the latest copy of the generated
 files when they're available. I'd like to continue having those generated
 and looked at by humans in reviews.
 
 I think if you asked non-devs, such as deployers, you'd find wider uses.
 Can you poll another group in addition to this mailing list?
 

The way I see it, the generated sample config file is itself
documentation. Perhaps that file should actually go where the docs go,
rather than sitting in the git tree.

With other libraries causing changes, we're not really reviewing
every change anyway; otherwise I wouldn't have sent this message. We
weren't able to review what keystoneclient did before it broke our
gate. Keystoneclient isn't going to review the matrix of dependent repos
for breakage there either.

We do review the code changes that lead to the relevant changes in our
samples, and that _should_ be enough. It works the same with all of
our other code-born documentation (such as the Heat template guide). So
I'm comfortable saying that reviewers should be able to catch obvious
things that would break the sample configs from the code alone, in the
same way that reviewers would find such an error in other such generated
documentation.
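
A rough sketch of what "generating the file during the dist/install process"
could look like; the generator script path and its options are assumptions,
and a real change would more likely hook into pbr/setup.cfg than a hand-rolled
command:

    # setup.py sketch: regenerate the sample config while building the sdist,
    # instead of keeping the generated file under version control.  The
    # tools/config/generate_sample.sh path mirrors the script mentioned in
    # this thread; adjust to whatever generator a given project ships.
    import subprocess
    from setuptools import setup
    from setuptools.command.sdist import sdist as _sdist

    class sdist(_sdist):
        def run(self):
            subprocess.check_call(
                ['bash', 'tools/config/generate_sample.sh', '-b', '.',
                 '-p', 'example_project', '-o', 'etc/example_project'])
            _sdist.run(self)

    setup(name='example-project', cmdclass={'sdist': sdist})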

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-26 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-03-26 11:27:58 -0700:
 On Wed, Mar 26, 2014 at 11:10:04AM -0700, Clint Byrum wrote:
  This is an issue that affects all of our git repos. If you are using
  oslo.config, you will likely also be using the sample config generator.
  
  However, for some reason we are all checking this generated file in.
   This makes no sense, as we humans are not editing it, and it often
  picks up config files from other things like libraries (keystoneclient
   in particular). This has led to breakage in the gate a few times for
  Heat, perhaps for others as well.
  
  I move that we all rm this file from our git trees, and start generating
  it as part of the install/dist process (I have no idea how to do
  this..). This would require:
  
  - rm sample files and add them to .gitignore in all trees
  - Removing check_uptodate.sh from all trees/tox.ini's
  - Generating file during dist/install process.
  
  Does anyone disagree?
 
 So this sounds like a great idea in theory, I'd love to stop getting
 surprise gate breakage every keystoneclient release because of minor
 changes to keystone_authtoken.
 
 My main concern is we're replacing surprise breakage due to keystoneclient
 with surprise breakage due to oslo.config, since that version is not
 capped.
 
 E.g. look at this review I just posted (too hastily) - generate_sample.sh
 has done something crazy and generated a totally broken config:
 
 https://review.openstack.org/#/c/83151/
 
 I'm not quite clear on why that's broken, but it does highlight one of the
 problems with relying on autogeneration with no review.  I guess we'll get
 to review the logs of the broken gate tests instead :\
 
 I'd love to hear ideas on how we can do this in an automated way which
 won't be really unstable/unreliable.

This seems like a real bug, and one that would hopefully be handled
through the normal bug fixing cycle that includes ensuring test
coverage. No?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

2014-03-26 Thread Solly Ross
Here's the bug for Glance: https://bugs.launchpad.net/glance/+bug/1298039

- Original Message -
From: Clark Boylan clark.boy...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, March 26, 2014 4:04:12 PM
Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

On Wed, Mar 26, 2014 at 12:32 PM, Solly Ross sr...@redhat.com wrote:
 What bug tracker should I file under?  I tried filing one under the openstack 
 common infrastructure tracker,
 but was told that it wasn't the correct place to file such a bug.

 Best Regards,
 Solly Ross

 - Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 26, 2014 3:28:32 PM
 Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

 Where is the RC bug tracking this?

 If it's that bad, we really need to be explicit about this with a
 critical bug at this stage of the release.

 -Sean

 On 03/26/2014 01:14 PM, Solly Ross wrote:
 Code which breaks:

 Glance's multiprocessing tests will break (the reason we should limit it
 now).
 For the future, people attempting to use psutil will have no clear version 
 target
 (Either they use 1.x and break with the people who install the latest 
 version from pip,
 of they use 2.0.0 and break with everything that doesn't use the latest 
 version).

 psutil's API is extremely unstable -- it has undergone major revisions going 
 from 0.x to 1.x, and now
 1.x to 2.0.0.  Limiting psutil explicitly to a single major version (it was 
 more or less implicitly limited
 before, since there was no major version above 1) ensures that the
 requirements.txt file actually
 indicates what is necessary to use OpenStack.

 The alternative option would be to update the glance tests, but my concern 
 is that 2.0.0 is not available
 from the package managers of most distros yet.

 Best Regards,
 Solly Ross

 - Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 26, 2014 10:39:41 AM
 Subject: Re: [openstack-dev] [depfreeze] Exception: limit psutil to 2.0.0

 On 03/26/2014 10:30 AM, Solly Ross wrote:
 Hi,
 I currently have a patch up for review 
 (https://review.openstack.org/#/c/81373/) to limit psutil to be < 2.0.0.
 2.0.0 just came out a couple weeks ago, and breaks the API in a major way.  
 Until we can port our code to the
 latest version, I suggest we limit the version of psutil to 1.x (currently 
 there's a lower bound in the 1.x
 range, just not an upper bound).

 Which code will be broken by this if it's not done? Is there an RC bug
 tracking it?

   -Sean



 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

You should file the bug against the projects that don't work with
latest psutil. The bug is in particular projects being incompatible
with a dependency and belongs to those projects.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-26 Thread Eoghan Glynn


- Original Message -
 On 27 March 2014 06:28, Eoghan Glynn egl...@redhat.com wrote:
 
 
  On 3/25/2014 1:50 PM, Matt Wagner wrote:
   This would argue to me that the easiest thing for Ceilometer might be
   to query us for IPMI stats, if the credential store is pluggable.
   Fetch these bare metal statistics doesn't seem too off-course for
   Ironic to me. The alternative is that Ceilometer and Ironic would both
   have to be configured for the same pluggable credential store.
 
  There is already a blueprint with a proposed patch here for Ironic to do
  the querying:
  https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
 
  Yes, so I guess there are two fundamentally different approaches that
  could be taken here:
 
  1. ironic controls the cadence of IPMI polling, emitting notifications
 at whatever frequency it decides, carrying whatever level of
 detail/formatting it deems appropriate, which are then consumed by
 ceilometer which massages these provided data into usable samples
 
  2. ceilometer acquires the IPMI credentials either via ironic or
 directly from keystone/barbican, before calling out over IPMI at
 whatever cadence it wants and transforming these raw data into
 usable samples
 
  IIUC approach #1 is envisaged by the ironic BP[1].
 
  The advantage of approach #2 OTOH is that ceilometer is in the driving
  seat as far as cadence is concerned, and the model is far more
  consistent with how we currently acquire data from the hypervisor layer
  and SNMP daemons.
 
 The downsides of #2 are:
  - more machines require access to IPMI on the servers (if a given
 ceilometer is part of the deployed cloud, not part of the minimal
 deployment infrastructure). This sets off security red flags in some
 organisations.
  - multiple machines (ceilometer *and* Ironic) talking to the same
 IPMI device. IPMI has a limit on sessions, and in fact the controllers
 are notoriously buggy - having multiple machines talking to one IPMI
 device is a great way to exceed session limits and cause lockups.
 
 These seem fundamental showstoppers to me.

Thanks Robert, that's really useful information, and I agree a
compelling argument to invert control in this case.

Cheers,
Eoghan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-26 Thread Gergely Matefi
Also, some systems have a more sophisticated IPMI topology than a single node
instance, as in the case of chassis-based systems.  Some other systems might
use vendor-specific IPMI extensions or alternate platform management
protocols that could require vendor-specific drivers to terminate them.

Going for #2 might require Ceilometer to implement vendor-specific drivers
in the end, slightly overlapping what Ironic is doing today. Just from a pure
architectural point of view, having a single driver is very preferable.

Regards,
Gergely






On Wed, Mar 26, 2014 at 8:02 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 I haven't gotten to my email back log yet, but want to point out that I
 agree with everything Robert just said. I also raised these concerns on the
 original ceilometer BP, which is what gave rise to all the work in ironic
 that Haomeng has been doing (on the linked ironic BP) to expose these
 metrics for ceilometer to consume.

 Typing quickly on a mobile,
 Deva
 On Mar 26, 2014 11:34 AM, Robert Collins robe...@robertcollins.net
 wrote:

 On 27 March 2014 06:28, Eoghan Glynn egl...@redhat.com wrote:
 
 
  On 3/25/2014 1:50 PM, Matt Wagner wrote:
   This would argue to me that the easiest thing for Ceilometer might be
   to query us for IPMI stats, if the credential store is pluggable.
   Fetch these bare metal statistics doesn't seem too off-course for
   Ironic to me. The alternative is that Ceilometer and Ironic would
 both
   have to be configured for the same pluggable credential store.
 
  There is already a blueprint with a proposed patch here for Ironic to
 do
  the querying:
  https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
 
  Yes, so I guess there are two fundamentally different approaches that
  could be taken here:
 
  1. ironic controls the cadence of IPMI polling, emitting notifications
 at whatever frequency it decides, carrying whatever level of
 detail/formatting it deems appropriate, which are then consumed by
 ceilometer which massages these provided data into usable samples
 
  2. ceilometer acquires the IPMI credentials either via ironic or
 directly from keystone/barbican, before calling out over IPMI at
 whatever cadence it wants and transforming these raw data into
 usable samples
 
  IIUC approach #1 is envisaged by the ironic BP[1].
 
  The advantage of approach #2 OTOH is that ceilometer is in the driving
  seat as far as cadence is concerned, and the model is far more
  consistent with how we currently acquire data from the hypervisor layer
  and SNMP daemons.

 The downsides of #2 are:
  - more machines require access to IPMI on the servers (if a given
 ceilometer is part of the deployed cloud, not part of the minimal
 deployment infrastructure). This sets off security red flags in some
 organisations.
  - multiple machines (ceilometer *and* Ironic) talking to the same
 IPMI device. IPMI has a limit on sessions, and in fact the controllers
 are notoriously buggy - having multiple machines talking to one IPMI
 device is a great way to exceed session limits and cause lockups.

 These seem fundamental showstoppers to me.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday March 27th at 17:00UTC

2014-03-26 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, March 27th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
03:30 ACDT
18:00 CET
12:00 CDT
10:00 PDT

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-26 Thread Eoghan Glynn


 I haven't gotten to my email back log yet, but want to point out that I agree
 with everything Robert just said. I also raised these concerns on the
 original ceilometer BP, which is what gave rise to all the work in ironic
 that Haomeng has been doing (on the linked ironic BP) to expose these
 metrics for ceilometer to consume.

Thanks Devananda, so it seems like closing out the ironic work started
in the icehouse BP[1] is the way to go, while on the ceilometer side
we can look into consuming these notifications.

If Haomeng needs further input from the ceilometer side, please shout.
And if there are some non-trivial cross-cutting issues to discuss, perhaps
we could consider having another joint session at the Juno summit?

Cheers,
Eoghan

[1] https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer
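
For illustration only, a sketch of what approach #1 could look like end to
end; the event type, payload keys and meter names are invented and are not
the actual ironic or ceilometer interfaces:

    # Hypothetical notification emitted by ironic after it polls IPMI itself
    # (approach #1), plus the ceilometer-side massaging into meter samples.
    notification = {
        'event_type': 'hardware.ipmi.metrics',          # invented name
        'publisher_id': 'ironic-conductor.host01',
        'payload': {
            'node_uuid': '00000000-0000-0000-0000-000000000000',
            'timestamp': '2014-03-26T20:15:00Z',
            'metrics': {
                'temperature_c': {'system_board': 41, 'cpu_0': 58},
                'fan_rpm': {'fan_1': 5400},
                'power_watts': 182,
            },
        },
    }

    def to_samples(notification):
        """Yield (meter_name, value) pairs from one notification payload."""
        metrics = notification['payload']['metrics']
        for sensor, reading in metrics.items():
            if isinstance(reading, dict):
                for name, value in reading.items():
                    yield 'hardware.ipmi.%s.%s' % (sensor, name), value
            else:
                yield 'hardware.ipmi.%s' % sensor, reading

    print(list(to_samples(notification)))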
 
 Typing quickly on a mobile,
 Deva
 On Mar 26, 2014 11:34 AM, Robert Collins  robe...@robertcollins.net 
 wrote:
 
 
 On 27 March 2014 06:28, Eoghan Glynn  egl...@redhat.com  wrote:
  
  
  On 3/25/2014 1:50 PM, Matt Wagner wrote:
   This would argue to me that the easiest thing for Ceilometer might be
   to query us for IPMI stats, if the credential store is pluggable.
   Fetch these bare metal statistics doesn't seem too off-course for
   Ironic to me. The alternative is that Ceilometer and Ironic would both
   have to be configured for the same pluggable credential store.
  
  There is already a blueprint with a proposed patch here for Ironic to do
  the querying:
  https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer .
  
  Yes, so I guess there are two fundamentally different approaches that
  could be taken here:
  
  1. ironic controls the cadence of IPMI polling, emitting notifications
  at whatever frequency it decides, carrying whatever level of
  detail/formatting it deems appropriate, which are then consumed by
  ceilometer which massages these provided data into usable samples
  
  2. ceilometer acquires the IPMI credentials either via ironic or
  directly from keystone/barbican, before calling out over IPMI at
  whatever cadence it wants and transforming these raw data into
  usable samples
  
  IIUC approach #1 is envisaged by the ironic BP[1].
  
  The advantage of approach #2 OTOH is that ceilometer is in the driving
  seat as far as cadence is concerned, and the model is far more
  consistent with how we currently acquire data from the hypervisor layer
  and SNMP daemons.
 
 The downsides of #2 are:
 - more machines require access to IPMI on the servers (if a given
 ceilometer is part of the deployed cloud, not part of the minimal
 deployment infrastructure). This sets off security red flags in some
 organisations.
 - multiple machines (ceilometer *and* Ironic) talking to the same
 IPMI device. IPMI has a limit on sessions, and in fact the controllers
 are notoriously buggy - having multiple machines talking to one IPMI
 device is a great way to exceed session limits and cause lockups.
 
 These seem fundamental showstoppers to me.
 
 -Rob
 
 --
 Robert Collins  rbtcoll...@hp.com 
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Vishvananda Ishaya

On Mar 26, 2014, at 11:40 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Wed, 2014-03-26 at 09:47 -0700, Vishvananda Ishaya wrote:
 Personally I view this as a bug. There is no reason why we shouldn’t
 support arbitrary grouping of zones. I know there is at least one
 problem with zones that overlap regarding displaying them properly:
 
 https://bugs.launchpad.net/nova/+bug/1277230
 
 There is probably a related issue that is causing the error you see
 below. IMO both of these should be fixed. I also think adding a
 compute node to two different aggregates with azs should be allowed.
 
 It also might be nice to support specifying multiple zones in the
 launch command in these models. This would allow you to limit booting
 to an intersection of two overlapping zones.
 
 A few examples where these ideas would be useful:
 
 1. You have 3 racks of servers and half of the nodes from each rack
 plugged into a different switch. You want to be able to specify to
 spread across racks or switches via an AZ. In this model you could
 have a zone for each switch and a zone for each rack.
 
 2. A single cloud has 5 racks in one room in the datacenter and 5
 racks in a second room. You’d like to give control to the user to
 choose the room or choose the rack. In this model you would have one
 zone for each room, and smaller zones for each rack.
 
 3. You have a small 3 rack cloud and would like to ensure that your
 production workloads don’t run on the same machines as your dev
 workloads, but you also want to use zones spread workloads across the
 three racks. Similarly to 1., you could split your racks in half via
 dev and prod zones. Each one of these zones would overlap with a rack
 zone.
 
 You can achieve similar results in these situations by making small
 zones (switch1-rack1 switch1-rack2 switch1-rack3 switch2-rack1
 switch2-rack2 switch2-rack3) but that removes the ability to decide to
 launch something with less granularity. I.e. you can’t just specify
 ‘switch1' or ‘rack1' or ‘anywhere’
 
 I’d like to see all of the following work
 nova boot … (boot anywhere)
 nova boot —availability-zone switch1 … (boot it switch1 zone)
 nova boot —availability-zone rack1 … (boot in rack1 zone)
 nova boot —availability-zone switch1,rack1 … (boot
 
 Personally, I feel it is a mistake to continue to use the Amazon concept
 of an availability zone in OpenStack, as it brings with it the
 connotation from AWS EC2 that each zone is an independent failure
 domain. This characteristic of EC2 availability zones is not enforced in
 OpenStack Nova or Cinder, and therefore creates a false expectation for
 Nova users.
 
 In addition to the above problem with incongruent expectations, the
 other problem with Nova's use of the EC2 availability zone concept is
 that availability zones are not hierarchical -- due to the fact that EC2
 AZs are independent failure domains. Not having the possibility of
 structuring AZs hierarchically limits the ways in which Nova may be
 deployed -- just see the cells API for the manifestation of this
 problem.
 
 I would love it if the next version of the Nova and Cinder APIs would
 drop the concept of an EC2 availability zone and introduce the concept
 of a generic region structure that can be infinitely hierarchical in
 nature. This would enable all of Vish's nova boot commands above in an
 even simpler fashion. For example:
 
 Assume a simple region hierarchy like so:
 
          regionA
         /       \
   regionB     regionC
 
 # User wants to boot in region B
 nova boot --region regionB
 # User wants to boot in either region B or region C
 nova boot --region regionA

I think the overlapping zones allow for this and also enable additional use
cases, as mentioned in my earlier email. A hierarchy doesn't work for the
rack/switch model. I'm definitely +1 on breaking from the Amazon usage
of availability zones, but I'm a bit leery of adding another parameter to
the create request. It is also unfortunate that region already has a meaning
in the Amazon world, which will add confusion.
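
To make the intersection semantics concrete, a toy sketch (the host/zone data
is made up; in practice membership would come from host aggregates):

    # Toy filter: a host qualifies only if it is in every requested zone.
    HOST_ZONES = {
        'host1': {'switch1', 'rack1', 'prod'},
        'host2': {'switch1', 'rack2', 'dev'},
        'host3': {'switch2', 'rack1', 'prod'},
    }

    def hosts_for(requested_zones):
        wanted = set(requested_zones)
        return sorted(h for h, zones in HOST_ZONES.items() if wanted <= zones)

    print(hosts_for(['switch1', 'rack1']))   # ['host1']
    print(hosts_for(['rack1']))              # ['host1', 'host3']
    print(hosts_for([]))                     # all hosts -> boot anywhere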

Vish

 
 I think of the EC2 availability zone concept in the Nova and Cinder APIs
 as just another example of implementation leaking out of the API. The
 fact that EC2 availability zones are implemented as independent failure
 domains and thus have a non-hierarchical structure has caused the Nova
 API to look and feel a certain way that locks the API into the
 implementation of a non-OpenStack product.
 
 Best,
 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-26 Thread Sylvain Bauza
I can't agree more on this. Although the name sounds identical to AWS, Nova
AZs are *not* for segregating compute nodes, but rather exposing to users a
certain sort of grouping.
Please see this pointer for more info if needed :
http://russellbryantnet.wordpress.com/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/

Regarding the bug mentioned by Vish [1], I'm the owner of it. I took it a
while ago, but things and priorities changed; I can take a look at it
this week and hope to deliver a patch by next week.

Thanks,
-Sylvain

[1] https://bugs.launchpad.net/nova/+bug/1277230




2014-03-26 19:00 GMT+01:00 Chris Friesen chris.frie...@windriver.com:

 On 03/26/2014 11:17 AM, Khanh-Toan Tran wrote:

  I don't know why you need a
 compute node that belongs to 2 different availability zones. Maybe
 I'm wrong, but to me it's logical that availability zones do not
 share the same compute nodes. Availability zones have the role
 of partitioning your compute nodes into zones that are physically
 separated (in the large, this would require separation of physical
 servers, networking equipment, power sources, etc.), so that when a
 user deploys 2 VMs in 2 different zones, he knows that these VMs do
 not land on the same host, and if one zone fails, the others continue
 working and the client does not lose all of his VMs.


 See Vish's email.

 Even under the original meaning of availability zones you could
 realistically have multiple orthogonal availability zones based on room,
 or rack, or network, or dev vs production, or even has_ssds and a
 compute node could reasonably be part of several different zones because
 they're logically in different namespaces.

 Then an end-user could boot an instance, specifying networkA, dev, and
 has_ssds and only hosts that are part of all three zones would match.
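
 As a rough sketch of that matching (the host and zone names below are made
 up; this is not the actual scheduler code), picking hosts that belong to
 every requested zone is just a set intersection:

     # Hypothetical sketch: each host may belong to several orthogonal zones.
     host_zones = {
         'host1': {'networkA', 'dev', 'has_ssds'},
         'host2': {'networkA', 'prod', 'has_ssds'},
         'host3': {'networkB', 'dev', 'has_ssds'},
     }

     def hosts_matching(requested_zones):
         """Return hosts that are members of every requested zone."""
         wanted = set(requested_zones)
         return [h for h, zones in host_zones.items() if wanted <= zones]

     print(hosts_matching(['networkA', 'dev', 'has_ssds']))   # ['host1']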

 Even if they're not used for orthogonal purposes, multiple availability
 zones might make sense.  Currently availability zones are the only way an
 end-user has to specify anything about the compute host he wants to run on.
  So it's not entirely surprising that people might want to overload them
 for purposes other than physical partitioning of machines.

 Chris




Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-03-26 Thread John Griffith
On Wed, Mar 26, 2014 at 12:18 PM, Eric Harney ehar...@redhat.com wrote:

 On 03/25/2014 11:07 AM, Shlomi Sasson wrote:

  I am not sure what the right approach to handle this would be. I already
 have the code; should I open a bug or a blueprint to track this issue?
 
  Best Regards,
  Shlomi
 
 

 A blueprint around this would be appreciated.  I have had similar
 thoughts around this myself, that these should be options for the LVM
 iSCSI driver rather than different drivers.

 These options also mirror how we can choose between tgt/iet/lio in the
 LVM driver today.  I've been assuming that RDMA support will be added to
 the LIO driver there at some point, and this seems like a nice way to
 enable that.
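
 As a rough sketch of the idea (the option value names and helper class names
 below are illustrative only, not the actual Cinder configuration), a single
 LVM driver could map a transport/helper setting to the code that manages
 exports rather than shipping a separate driver per transport:

     # Illustrative sketch only: resolve one config value to an export helper.
     TARGET_HELPERS = {
         'tgt': 'TgtAdm',   # iSCSI over TCP via tgtd
         'iet': 'IetAdm',   # iSCSI Enterprise Target
         'lio': 'LioAdm',   # LIO target; an iSER/RDMA fabric could hang off this
     }

     def load_target_helper(option_value):
         """Map a (hypothetical) 'iscsi_target_helper' option to a helper name."""
         try:
             return TARGET_HELPERS[option_value]
         except KeyError:
             raise ValueError('Unknown iSCSI target helper: %s' % option_value)

     print(load_target_helper('lio'))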



I'm open to improving this, but I am curious: you do know there's currently an
ISER subclass of the iSCSI driver in Cinder, right?
http://goo.gl/kQJoDO


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-26 Thread Sean Dague
On 03/25/2014 04:49 AM, Maru Newby wrote:
 
 On Mar 21, 2014, at 9:01 AM, David Kranz dkr...@redhat.com wrote:
 
 On 03/20/2014 04:19 PM, Rochelle.RochelleGrober wrote:

 -Original Message-
 From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
 Sent: Thursday, March 20, 2014 12:13 PM

 'Project-specific functional testing' in the Marconi context is treating
 Marconi as a complete system, making Marconi API calls and verifying the
 response - just like an end user would, but without Keystone. If one of
 these tests fails, it is because there is a bug in the Marconi code, and
 not because its interaction with Keystone caused it to fail.

 That being said, there are certain cases where having a project-specific
 functional test makes sense. For example, swift has a functional test job
 that starts swift in devstack. But those things are normally handled on a
 per-case basis. In general, if the project is meant to be part of the larger
 OpenStack ecosystem, then Tempest is the place to put functional testing.
 That way you know it works with all of the other components. The thing is,
 in OpenStack what seems like a project-isolated functional test almost
 always involves another project in real use cases (for example, Keystone
 auth with API requests).

 
 
 
 

 One of the concerns we heard in the review was 'having the functional
 tests elsewhere (i.e. within the project itself) does not count and they
 have to be in Tempest'.
 This has made us as a team wonder if we should migrate all our functional
 tests to Tempest.
 But from Matt's response, I think it is reasonable to continue on our
 current path and have the functional tests in Marconi coexist along with
 the tests in Tempest.

 I think that what is being asked, really is that the functional tests could 
 be a single set of tests that would become a part of the tempest repository 
 and that these tests would have an ENV variable as part of the 
 configuration that would allow either no Keystone or Keystone or some 
 such, if that is the only configuration issue that separates running the 
 tests isolated vs. integrated.  The functional tests need to be as much as 
 possible a single set of tests to reduce duplication and remove the 
 likelihood of two sets getting out of sync with each other/development.  If 
 they only run in the integrated environment, that's ok, but if you want to 
 run them isolated to make debugging easier, then it should be a 
 configuration option and a separate test job.
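
 A small sketch of what such a switch could look like in a single set of
 functional tests (the USE_KEYSTONE environment variable and the test class
 below are invented for illustration; this is not an existing Tempest option):

     # Hypothetical sketch: one set of functional tests, auth toggled by config.
     import os
     import unittest

     USE_KEYSTONE = os.environ.get('USE_KEYSTONE', 'true').lower() == 'true'

     class QueueApiTest(unittest.TestCase):

         def setUp(self):
             # Integrated run: we would fetch a token from Keystone here;
             # isolated run: talk to the service directly with no auth.
             self.headers = {}
             if USE_KEYSTONE:
                 self.headers['X-Auth-Token'] = self._get_token()

         def _get_token(self):
             # Placeholder for real Keystone authentication (hypothetical).
             return 'dummy-token'

         def test_headers_prepared(self):
             expected = ['X-Auth-Token'] if USE_KEYSTONE else []
             self.assertEqual(expected, list(self.headers))

     if __name__ == '__main__':
         unittest.main()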

 So, if my assumptions are correct, QA only requires functional tests for 
 integrated runs, but if the project QAs/Devs want to run isolated for dev 
 and devtest purposes, more power to them.  Just keep it a single set of 
 functional tests and put them in the Tempest repository so that if a 
 failure happens, anyone can find the test and do the debug work without 
 digging into a separate project repository.

 Hopefully, the tests as designed could easily take a new configuration 
 directive and a short bit of work with OS QA will get the integrated FTs 
 working as well as the isolated ones.

 --Rocky
 This issue has been much debated. There are some active members of our 
 community who believe that all the functional tests should live outside of 
 tempest in the projects, albeit with the same idea that such tests could be 
 run either as part of today's real tempest runs or mocked in various ways 
 to allow component isolation or better performance. Maru Newby posted a 
 patch with an example of one way to do this but I think it expired and I 
 don't have a pointer.
 
 I think the best place for functional api tests to be maintained is in the 
 projects themselves.  The domain expertise required to write api tests is 
 likely to be greater among project resources, and they should be tasked with 
 writing api tests pre-merge.  The current 'merge-first, test-later' procedure 
 of maintaining api tests in the Tempest repo makes that impossible.  Worse, 
 the cost of developing functional api tests is higher in the integration 
 environment that is the Tempest default.

I disagree. I think all that ends up doing is creating greater variance
in quality between components in OpenStack. And it means now no one
feels responsible when a bad test in a project somewhere causes a gate
block.

If a core project team can't be bothered to work in the docs, infra, QA,
and other project repositories as part of the normal flow, that core
project is very clearly not integrated with the rest of OpenStack.

Being integrated needs to not just be a badge you get from the TC on a
vote, it actually means being integrated, beyond just your own git
tree that you have +2 on.

 The patch in question [1] proposes allowing pre-merge functional api test 
 maintenance and test reuse in an integration environment.
 
 
 m.
 
 1: https://review.openstack.org/#/c/72585/

This effort is interesting, but I feel like it's so far down on Maslow's
hierarchy of needs, 

Re: [openstack-dev] [Mistral][TaskFlow] Long running actions

2014-03-26 Thread Dmitri Zimine
=== Long-running delegate [1] actions == 

Yes, the third model of lazy / passive engine is needed. 

Obviously workflows contain a mix of different tasks, so this 3rd model should 
handle both normal tasks (run on a worker and return) and long-running 
delegates. The active mechanism which is alive during the process, currently 
done by the TaskFlow engine, may be moved from the TaskFlow library to a client 
(Mistral) which implements the watchdog. This may require a lower-level API to 
TaskFlow. 

The benefit of the model 2 is 'ease of use' for some clients (create tasks, 
define flow, instantiate engine, engine.run(), that's it!). But I agree that 
the model 2 - worker-based TaskFlow engine - won't scale to WFaaS requirements 
even though the engine is not doing much. 

The Mistral POC implements a passive, lazy workflow model: a service moving the 
states of multiple parallel executions. I'll detail how Mistral handles 
long-running tasks in a separate thread (maybe here: 
http://tinyurl.com/n3v9lt8) and we can look at how TaskFlow may change to fit. 

DZ 

PS. Thanks for clarifications on the target use cases for the execution models! 


[1] Calling them 'delegate actions' to distinguish between long running 
computations on workers, and actions that delegate to 3rd party systems (hadoop 
job, human input gateway, etc).


On Mar 24, 2014, at 11:51 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 So getting back to this thread.
 
 I'd like to split it up into a few sections to address the HA and 
 long-running-actions cases, which I believe are 2 separate (but connected) 
 questions.
 
 === Long-running actions ===
 
 First, let me describe a little bit about what I believe are the execution 
 models that taskflow currently targets (but is not limited to just targeting 
 in general). 
 
 The first execution model I would call the local execution model, this model 
 involves forming tasks and flows and then executing them inside an 
 application, that application is running for the duration of the workflow 
 (although if it crashes it can re-establish the task and flows that it was 
 doing and attempt to resume them). This could also be what openstack projects 
 would call the 'conductor' approach where nova, ironic, trove have a 
 conductor which manages these long-running actions (the conductor is 
 alive/running throughout the duration of these workflows, although it may be 
 restarted while running). The restarting + resuming part is something that 
 openstack hasn't handled so gracefully currently, typically requiring either 
 some type of cleanup at restart (or by operations), with taskflow using this 
 model the resumption part makes it possible to resume from the last saved 
 state (this connects into the persistence model that taskflow uses, the state 
 transitions, how execution itself occurs...). 
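 
 For illustration, a minimal sketch of this local execution model using
 taskflow's basic helpers (a sketch only; details may differ across versions):
 
     # Minimal sketch of the "local execution" model: the engine lives inside
     # the application and runs the tasks in-process.
     import taskflow.engines
     from taskflow.patterns import linear_flow
     from taskflow import task
 
     class CreateVolume(task.Task):
         def execute(self):
             print('creating volume')
 
     class AttachVolume(task.Task):
         def execute(self):
             print('attaching volume')
 
     flow = linear_flow.Flow('provision').add(CreateVolume(), AttachVolume())
     # The caller stays alive for the duration; state transitions and
     # persistence are handled by the engine while run() executes.
     taskflow.engines.run(flow)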
 
 The second execution model is an extension of the first, whereby there is 
 still a type of 'conductor' that is managing the life-time of the workflow, 
 but instead of locally executing tasks in the conductor itself tasks are now 
 executed on remote-workers (see http://tinyurl.com/lf3yqe4
 ). The engine currently still is 'alive' for the life-time of the execution, 
 although the work that it is doing is relatively minimal (since its not 
 actually executing any task code, but proxying those requests to others 
 works). The engine while running does the conducting of the remote-workers 
 (saving persistence details, doing state-transitions, getting results, sending 
 requests to workers...).
 
 As you have already stated, if a task is going to run for 5+ days (some 
 really long hadoop job for example) then these 2 execution models may not be 
 suited for this type of usage due to the current requirement that the engine 
 overseeing the work must be kept alive (since something needs to receive 
 responses and deal with state transitions and persistence). If the desire is 
 to have a third execution model, one that can handle extremely 
 long-running tasks without needing an active mechanism that is 'alive' during 
 this process then I believe that would call for the creation of a new engine 
 type in taskflow 
 (https://github.com/openstack/taskflow/tree/master/taskflow/engines) that 
 deals with this use-case. I don't believe it would be hard to create this 
 engine type although it would involve more complexity than what exists. 
 Especially since there needs to be some 'endpoint' that receives responses 
 when the 5+ day job actually finishes (so in this manner some type of code 
 must be 'always' running to deal with these responses anyway). So that means 
 there would likely need to be a 'watchdog' process that would always be 
 running that itself would do the state-transitions and result persistence 
 (and so-on), in a way this would be a 'lazy' version of the above 
 first/second execution models. 
 
 === HA ===
 
 So this is an interesting question, and to me is strongly 

Re: [openstack-dev] [Mistral][TaskFlow] Long running actions

2014-03-26 Thread Joshua Harlow
Cool, sounds great.

I think all 3 models can co-exist (since each serves a good purpose); it'd be 
interesting to see how the POC 'engine' can become a taskflow 'engine' (aka the 
lazy_engine).

As to scalability, I agree the lazy_engine would be nicer, but how much more 
scalable it would be is a tough one to quantify (the openstack systems that 
have active conductors, aka model #2, seem to scale pretty well).

Of course there are some interesting questions laziness brings up; it'd be 
interesting to see how the POC addressed them.

Some questions I can think of (currently); maybe you can address them in the 
other thread (which is fine too).

What does the watchdog do? Is it activated periodically to 'reap' jobs that 
have timed out (or have gone past some time limit)? How does the watchdog know 
that it is reaping jobs that are not actively being worked on (a timeout likely 
isn't sufficient for jobs that just take a very long time)? Is there a 
connection into ZooKeeper (or some similar system) to do this kind of 
'liveness' verification instead? What does the watchdog do when reaping tasks? 
(revert them, retry them, other..?)

I'm not quite sure how taskflow would use mistral as a client for this 
watchdog, since the watchdog process is pretty key to the lazy_engine's 
execution model and it seems like it would be a bad idea to split that logic 
from the actual execution model itself (seeing that the watchdog is involved in 
the execution process, and really isn't external to it). To me the concept of 
the lazy_engine is similar to the case where an engine 'crashes' while running; 
in a way the lazy_engine 'crashes on purpose' after asking a set of workers to 
do some action (and hands over the resumption of 'itself' to this watchdog 
process). The watchdog then watches over the workers, and on a response from 
some worker the watchdog resumes the engine and then lets the engine 'crash on 
purpose' again (and repeat). So the watchdog and the lazy_engine execution 
model seem to be pretty interconnected.
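
To make the 'crash on purpose' idea concrete, a toy sketch (not taskflow code; 
all names below are invented):

    # Toy sketch of a lazy engine + watchdog: the engine persists state and
    # exits after dispatching work; the watchdog resumes it when a worker
    # responds (or when a liveness check decides the work must be reaped).
    def dispatch(task_name, store):
        # Persist that we're waiting on a worker, then "crash on purpose".
        store['waiting_on'] = task_name
        print('dispatched %s to a worker, engine going away' % task_name)

    def watchdog_on_response(task_name, result, store):
        # Resume the engine just long enough to record the transition.
        if store.get('waiting_on') == task_name:
            store.setdefault('results', {})[task_name] = result
            store.pop('waiting_on')
            print('resumed engine, recorded result for %s' % task_name)

    store = {}
    dispatch('long_hadoop_job', store)
    watchdog_on_response('long_hadoop_job', 'done', store)
    print(store)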

-Josh

Re: [openstack-dev] [Neutron][ML2][Ml2Plugin] Setting _original_network in NetworkContext:

2014-03-26 Thread Andre Pech
Hi Nader,

When I wrote this, the intention was that original_network only really
makes sense during an update_network call (ie when there's an existing
network that you are modifying). In a create_network call, the assumption
is that no network exists yet, so there is no original network to set.

Can you provide a bit more detail on the case where there's an existing
network when create_network is called? Sorry, I didn't totally follow when
this would happen.

Thanks
Andre


On Tue, Mar 25, 2014 at 8:45 AM, Nader Lahouti nader.laho...@gmail.com wrote:

 Hi All,

 In the current Ml2Plugin code when 'create_network' is called, as shown
 below:



  def create_network(self, context, network):
      net_data = network['network']
      ...
      session = context.session
      with session.begin(subtransactions=True):
          self._ensure_default_security_group(context, tenant_id)
          result = super(Ml2Plugin, self).create_network(context, network)
          ...
          mech_context = driver_context.NetworkContext(self, context, result)
          self.mechanism_manager.create_network_precommit(mech_context)
          ...


  the original_network parameter is not set (the default is None) when
  instantiating NetworkContext, and as a result the mech_context only has the
  value of the network object returned from super(Ml2Plugin,
  self).create_network().

  This causes an issue when a mechanism driver needs to use the original
  network parameters (the ones given to create_network), especially when an
  extension is used for the network resources.

  (The 'result' only has the network attributes without the extension data,
  and it is what is used to set '_network' in the NetworkContext object.)

  Even using the extension function registration via
  db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(...) won't
  help, as the network object that is passed to the registered function does
  not include the extension parameters.


  Is there any reason that original_network is not set when initializing
  the NetworkContext? Would it cause any issue to set it to 'net_data', so
  that any mechanism driver can use the original network parameters as they
  were given when create_network was called?
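
  For clarity, the change being discussed would look roughly like this inside
  create_network() (a sketch only, assuming NetworkContext keeps its
  original_network keyword argument):

      # Fragment of Ml2Plugin.create_network(): pass the request body so that
      # mechanism drivers can also see the attributes (including extension
      # data) supplied by the caller, not just what the DB call returned.
      mech_context = driver_context.NetworkContext(self, context, result,
                                                   original_network=net_data)
      self.mechanism_manager.create_network_precommit(mech_context)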


 Appreciate your comments.


 Thanks,

 Nader.







[openstack-dev] [nova][vmware] retrospective on IceHouse and a call to action for Juno

2014-03-26 Thread Shawn Hartsock
Next week during the VMwareAPI subteam meeting I would like to discuss
blueprint priority order and tentative scheduling for Juno. I have a
proposal for the order that I would like to conduct a formal vote on
and I hope that we as a community can abide by the vote's results.

In short, we currently have a number of blueprints in flight that were
IceHouse near-misses, and new features are already going to be starved
for reviewer attention. Adding *more* features is likely to make the
problem worse.

I am advocating for refactorings-first and features later.

If you've not read:
http://lists.openstack.org/pipermail/openstack-dev/2014-February/028077.html

Please do. It's good background and dove-tails with this topic.

There is a tl;dr at the end.

== Summary ==

I used to send out weekly blueprint, bug, and review tracking emails
focused on VMware-related changes. I've stopped doing that. The reason
is that I have not seen a return on the investment of making those
updates to the community. In this public retrospective on
IceHouse, I hope that I will shed light on which practices were
working and which were not.

== A description of the problem ==

We can't get features merged upstream. Many people are expending
effort and this effort is not being rewarded and the driver's
evolution is suffering for it.

I have been observing the VMware drivers' development since Havana
opened for accepting submissions back in early 2013 and I think we
have a pattern that we as a community need to address. By community, I
mean those of us committing to the vmwareapi drivers in Nova.

I recall working with developers in the broader community (not VMware
employees) to get new features into the Nova driver for vCenter. And I
recall intimately that we just missed merging in Havana-1. In fact, of
the blueprints I had been tracking back then, no blueprints merged and
they all slid to Havana-2. We worked very hard and most
blueprints missed Havana-3 with only a handful of exceptions.

During Havana I refrained from large change suggestions because I was
new to the community and any such change risked blowing up other
developers work. Big changes can be very disruptive even if they are
for good causes. So no major refactoring work occurred.

In IceHouse I started tracking things much more thoroughly. This was
the first time we had a significant number of developers to coordinate
and we had in the neighborhood of a dozen blueprints to suggest adding
features to the Nova driver for vCenter in IceHouse. A significant
number of these were ready (by our group standards) for IceHouse-2.
These all slipped to IceHouse-3 in the same manner all blueprints for
H2 had slipped. Finally, I3 followed the same pattern as H3 with only
a small set of features surviving the gauntlet.

In IceHouse, only two of the dozen blueprints we as a driver sub-team
had in flight managed to land. In the linked retrospective detail
paste I've managed to consolidate notes I made throughout IceHouse on
blueprint progress. Snapshots of these notes are publicly available on
the IRC logs for the VMwareAPI sub-team if anyone would like to verify
my summary of events.

IceHouse retrospective detail:
  http://paste.openstack.org/raw/74393/

VMwareAPI team meeting details:
https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Next_Meeting

== Learning from Successes ==

Of the thousands of person hours spent by VMware staff and non-staff
working on the VMwareAPI drivers only a handful of feature patches
merged. Why is that?

I have listed all the feature patches that merged that I was able to
find quickly in the previous link on retrospective detail. One
particularly difficult merge was
https://review.openstack.org/#/c/56416/ standing at an astonishing 74
revisions and four months of concentrated effort to achieve a change
of 744 lines in a driver with a total line count on the order of
13,000 lines (including the tests.) This is an 5.6% change in the
driver's code base costing 4 months of effort and thousands of person
hours between multiple companies. Not to mention the developer's
personal sacrifice as they worked nights and weekends to make those
744 lines happen.

In that time we see that the code in review enters conflicts with
another high priority feature:
* https://review.openstack.org/#/c/56416/60/nova/virt/vmwareapi/vmops.py

Which causes both blueprints to be revised
* https://review.openstack.org/#/c/63084/23/nova/virt/vmwareapi/vmops.py

March 6th becomes a very busy and confusing day as the two
attention-starved BPs are wrestled into the code base. I'll leave parsing the
details to the reader as an exercise. The interaction between these
two patches is interesting enough to be worth closer examination.

== A common complaint ==

Common complaints about the Nova vmware driver that you will find
elsewhere on this mailing list include (paraphrased):

* I can't tell where something is tested or how
* The code is hard to follow so I hate reviewing that 
