Re: [openstack-dev] [Fuel] Separating Ceph pools depending on storage type

2015-03-20 Thread Andrew Woodward
Right now, we create pools for images, compute, and volumes, and radosgw
creates a bunch; all are assigned to the default crush map.

From the Ceph side, the way to separate one pool from another is to create
a ruleset in the crush map that isolates the devices; the pools can then be
mapped to it. So once we create a new ruleset, which pools would we map to
it? Do we create separate ones for each type of service? Just Cinder?
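
To make the Ceph-side step concrete, here is a rough sketch of what such a
ruleset could look like in a decompiled crush map; the bucket names, IDs,
and weights are illustrative, and the exact syntax depends on the Ceph
release:

```
# hypothetical bucket grouping only the SSD-backed OSDs of one node
host node-1-ssd {
        id -11
        alg straw
        hash 0
        item osd.2 weight 1.000
}
root ssd {
        id -10
        alg straw
        hash 0
        item node-1-ssd weight 1.000
}

# ruleset that places data only on devices under the 'ssd' root
rule ssd_rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
```

A pool would then be mapped to it with something like
"ceph osd pool set volumes crush_ruleset 1" (pool name illustrative).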

From the Fuel side, how would we distinguish between devices, and what
would we assign them to? A pool (limited) or a ruleset (flexible, but
then we need to code putting pools on them)?

Then to put them together, we are going to need a data-driven crush
tool written, which I haven't seen.

While we are here, we also need to think about how we might want to set
replication between NodeGroups (which tend to logically map to a rack
in a spine-and-leaf topology).

I'd also loop in the puppet-openstack guys, because at some point I'd
like to switch to the upstream module.

On Thu, Mar 19, 2015 at 7:12 PM, Rogon, Kamil kamil.ro...@intel.com wrote:
 Hello,

 I want to initiate a discussion about different backend storage types for
 Ceph. Right now all types of drives (HDD, SAS, SSD) are treated the same
 way, so performance can vary widely.

 It would be good to detect SSD drives and create a separate Ceph pool for
 them. From the user perspective, it should be possible to select a pool
 when scheduling an instance (a scenario for VMs that need high IOPS, like
 databases).

 Regards,
 Kamil Rogon
 
 ---
 Intel Technology Poland sp. z o.o.
 KRS 101882
 ul. Slowackiego 173
 80-298 Gdansk



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrew
Mirantis
Fuel community ambassador
Ceph community



[openstack-dev] [Fuel] Creating roles with fuel client

2015-03-20 Thread Dmitriy Shulyak
Hi team,

I wasn't able to participate in the Fuel weekly meeting, so for those of
you who are curious how to create roles with the fuel client - here is
documentation on this topic [1].

And here is an example of how it can be used, together with granular
deployment, to create new roles and add deployment logic for those roles -
[2].

[1] https://review.openstack.org/#/c/162085/
[2]
https://review.openstack.org/#/c/161192/7/pages/reference-architecture/task-deployment/0060-add-new-role.rst


Re: [openstack-dev] [infra][horizon] Need help at debugging requirements issue

2015-03-20 Thread Matthias Runge
On 19/03/15 15:52, Ihar Hrachyshka wrote:

 [1] https://review.openstack.org/#/c/155353/
 
 
 Hi,
 
 it all comes to the fact that DEVSTACK_GATE_INSTALL_TESTONLY=1 is not
 specified in the requirements integration job. I think you need to set
 it at [1]. In that case, your test requirements will also be installed
 during the job.
 
Thanks Ihar,

the issue is, the only change was to remove the upper boundary from

Django>=1.4.2,<1.7

And removing <1.7 from that line resulted in not installing
django-nose any more.

Matthias



Re: [openstack-dev] [Fuel] Separating Ceph pools depending on storage type

2015-03-20 Thread Federico Michele Facca
Hi,
generally speaking, it would be nice to have the possibility to define
availability zones, and this could be used to group not only computing
resources but also storage ones. If I am not wrong, there is already a
discussion or blueprint on this from the Mirantis folks. Then again, I am
not sure this would be exactly what you need :)

best,
federico

On Fri, Mar 20, 2015 at 3:12 AM, Rogon, Kamil kamil.ro...@intel.com wrote:

 Hello,

 I want to initiate a discussion about different backend storage types for
 Ceph. Right now all types of drives (HDD, SAS, SSD) are treated the same
 way, so performance can vary widely.

 It would be good to detect SSD drives and create a separate Ceph pool for
 them. From the user perspective, it should be possible to select a pool
 when scheduling an instance (a scenario for VMs that need high IOPS, like
 databases).

 Regards,
 Kamil Rogon

 
 ---
 Intel Technology Poland sp. z o.o.
 KRS 101882
 ul. Slowackiego 173
 80-298 Gdansk







-- 
--
Future Internet is closer than you think!
http://www.fiware.org

Official Mirantis partner for OpenStack Training
https://www.create-net.org/community/openstack-training

-- 
Dr. Federico M. Facca

CREATE-NET
Via alla Cascata 56/D
38123 Povo Trento (Italy)

P  +39 0461 312471
M +39 334 6049758
E  federico.fa...@create-net.org
T @chicco785
W  www.create-net.org


Re: [openstack-dev] [Neutron] VLAN transparency support

2015-03-20 Thread Akihiro Motoki
An API extension is the only way for users to know which features are
available until we support API microversioning (v2.1 or something).
I believe VLAN transparency support should be implemented as an
extension, not by changing the core resource attributes directly.
Otherwise users (including Horizon) cannot know whether the field is available or not.

Even though VLAN transparency and MTU support are basic features, they
are better implemented as an extension.
Configuration does not help from the API perspective, as it is not visible
through the API.
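
As a rough illustration of the extension route described above: a
Neutron-style extension typically declares the new field in an attribute
map keyed by the existing resource, so clients can discover it by listing
loaded extensions. This sketch is illustrative, not actual Neutron code:

```python
# Hypothetical sketch of how a Neutron-style API extension declares a new
# attribute on the existing 'networks' resource, instead of editing the
# core resource map. Keys mirror the Neutron v2 attribute-map convention,
# but the names here are illustrative.

def convert_to_boolean(value):
    # Simplified stand-in for Neutron's attribute converters.
    if value in (True, 'true', 'True', '1', 1):
        return True
    if value in (False, 'false', 'False', '0', 0):
        return False
    raise ValueError("invalid boolean: %r" % (value,))

EXTENDED_ATTRIBUTES_2_0 = {
    'networks': {
        'vlan_transparent': {
            'allow_post': True,       # may be set on network create
            'allow_put': False,       # immutable after creation
            'default': False,
            'convert_to': convert_to_boolean,
            'is_visible': True,       # shown in API responses, so clients
                                      # can see whether the field exists
        },
    },
}
```

Because the attribute lives in the extension's map rather than the core
resource, a deployment whose plugin does not load the extension simply
never exposes the field.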

We are discussing moving away from extension attributes, as Armando
commented, but I think that discussion is about resources/attributes
which are already well used and required.
It looks natural to me that new resources/attributes are implemented
via an extension.
The situation may change once we have support for API microversioning.
(It is being discussed in the context of Nova API microversioning in
the dev list thread started by Jay Pipes.)

In my understanding, the IPv6 two-mode case is an exception.
From the initial design we wanted to have full support of IPv6 in the
subnet resource, but through the discussion of IPv6 support it turned
out some more modes were required, and we decided to change the subnet
core resource. It is the exception.

Thanks,
Akihiro


2015-03-20 7:33 GMT+09:00 Armando M. arma...@gmail.com:
 If my memory does not fail me, changes to the API (new resources, new
 resource attributes or new operations allowed to resources) have always been
 done according to these criteria:

 an opt-in approach: this means we know the expected behavior of the plugin
 as someone has coded the plugin in such a way that the API change is
 supported;
 an opt-out approach: if the API change does not require explicit backend
 support, and hence can be deemed supported by all plugins.
 a 'core' extension (ones available in neutron/extensions) should be
 implemented at least by the reference implementation;

 Now, there might have been examples in the past where criteria were not met,
 but these should be seen as exceptions rather than the rule, and as such,
 fixed as defects so that an attribute/resource/operation that is
 accidentally exposed to a plugin will either be honored as expected or an
 appropriate failure is propagated to the user. Bottom line, the server must
 avoid failing silently, because failing silently is bad for the user.

 Now both features [1] and [2] violated the opt-in criterion above: they
 introduced resource attributes in the core models, forcing an undetermined
 behavior on plugins.

 I think that keeping [3,4] as is can lead to a poor user experience; IMO
 it's unacceptable to let a user specify the attribute, and see that
 ultimately the plugin does not support it. I'd be fine if this was an
 accident, but doing this by design is a bit evil. So, I'd suggest the
 following, in order to keep the features in Kilo:

 Patches [3, 4] did introduce config flags to control the plugin behavior,
 but it looks like they were not applied correctly; for instance, the
 vlan_transparent case was only applied to ML2. Similarly the MTU config flag
 was not processed server side to ensure that plugins that do not support
 advertisement do not fail silently. This needs to be rectified.
 As for VLAN transparency, we'd need to implement work item 5 (of 6) of spec
 [2], as this extension without at least a backend able to let tagged traffic
 pass doesn't seem right.
 Ensure we sort out the API tests so that we know how the features behave.

 Now granted that controlling the API via config flags is not the best
 solution, as this was always handled through the extension mechanism, but
 since we've been talking about moving away from extension attributes with
 [5], it does sound like a reasonable stop-gap solution.

 Thoughts?
 Armando

 [1]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
 [2]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
 [3]
 https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/mtu-selection-and-advertisement,n,z
 [4]
 https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/nfv-vlan-trunks,n,z
 [5] https://review.openstack.org/#/c/136760/

 On 19 March 2015 at 12:01, Gary Kotton gkot...@vmware.com wrote:

 With regards to the MTU, can you please point me to where we validate that
 the MTU defined by the tenant is actually <= the supported MTU on the
 network. I did not see this in the code (maybe I missed something).


 From: Ian Wells ijw.ubu...@cack.org.uk
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Thursday, March 19, 2015 at 8:44 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] VLAN transparency support

 Per the other discussion on attributes, I believe the change walks in
 historical footsteps and it's a matter of project policy 

[openstack-dev] Feature freeze + Kilo-3 development milestone available

2015-03-20 Thread Thierry Carrez
Hi everyone,

We just hit feature freeze[1], so please do not approve changes that add
features or new configuration options unless those have been granted a
feature freeze exception.

This is also string freeze[2], so you should avoid changing translatable
strings. If you have to modify a translatable string, you should give a
heads-up to the I18N team.

Finally, this is also DepFreeze[3], so you should avoid adding new
dependencies (bumping oslo or openstack client libraries is OK until
RC1). If you have a new dependency to add, raise a thread on
openstack-dev about it.

The kilo-3 development milestone was tagged; it contains more than 200
features and 825 bugfixes added since the kilo-2 milestone 6 weeks ago
(not even counting the Oslo libraries in the mix!). You can find the
full list of new features and fixed bugs, as well as tarball downloads, at:

https://launchpad.net/keystone/kilo/kilo-3
https://launchpad.net/glance/kilo/kilo-3
https://launchpad.net/nova/kilo/kilo-3
https://launchpad.net/horizon/kilo/kilo-3
https://launchpad.net/neutron/kilo/kilo-3
https://launchpad.net/cinder/kilo/kilo-3
https://launchpad.net/ceilometer/kilo/kilo-3
https://launchpad.net/heat/kilo/kilo-3
https://launchpad.net/trove/kilo/kilo-3
https://launchpad.net/sahara/kilo/kilo-3
https://launchpad.net/ironic/kilo/kilo-3

Congrats to all the PTLs and release management liaisons who made us
reach this important milestone in the Kilo development cycle!

Regards,

[1] https://wiki.openstack.org/wiki/FeatureFreeze
[2] https://wiki.openstack.org/wiki/StringFreeze
[3] https://wiki.openstack.org/wiki/DepFreeze

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron] Neutron extenstions

2015-03-20 Thread Akihiro Motoki
Forwarding my reply to the other thread too...

Multiple threads on the same topic are confusing.
Can we use this thread if we continue the discussion?
(The title of this thread looks appropriate.)


An API extension is the only way for users to know which features are
available until we support API microversioning (v2.1 or something).
I believe VLAN transparency support should be implemented as an
extension, not by changing the core resource attributes directly.
Otherwise users (including Horizon) cannot know whether the field is available or not.

Even though VLAN transparency and MTU support are basic features, they
are better implemented as an extension.
Configuration does not help from the API perspective, as it is not visible
through the API.

We are discussing moving away from extension attributes, as Armando
commented, but I think that discussion is about resources/attributes
which are already well used and required.
It looks natural to me that new resources/attributes are implemented
via an extension.
The situation may change once we have support for API microversioning.
(It is being discussed in the context of Nova API microversioning in
the dev list thread started by Jay Pipes.)

In my understanding, the IPv6 two-mode case is an exception.
From the initial design we wanted to have full support of IPv6 in the
subnet resource, but through the discussion of IPv6 support it turned
out some more modes were required, and we decided to change the subnet
core resource. It is the exception.

Thanks,
Akihiro

2015-03-20 8:23 GMT+09:00 Armando M. arma...@gmail.com:
 Forwarding my reply to the other thread here:

 

 If my memory does not fail me, changes to the API (new resources, new
 resource attributes or new operations allowed to resources) have always been
 done according to these criteria:

 an opt-in approach: this means we know the expected behavior of the plugin
 as someone has coded the plugin in such a way that the API change is
 supported;
 an opt-out approach: if the API change does not require explicit backend
 support, and hence can be deemed supported by all plugins.
 a 'core' extension (ones available in neutron/extensions) should be
 implemented at least by the reference implementation;

 Now, there might have been examples in the past where criteria were not met,
 but these should be seen as exceptions rather than the rule, and as such,
 fixed as defects so that an attribute/resource/operation that is
 accidentally exposed to a plugin will either be honored as expected or an
 appropriate failure is propagated to the user. Bottom line, the server must
 avoid failing silently, because failing silently is bad for the user.

 Now both features [1] and [2] violated the opt-in criterion above: they
 introduced resource attributes in the core models, forcing an undetermined
 behavior on plugins.

 I think that keeping [3,4] as is can lead to a poor user experience; IMO
 it's unacceptable to let a user specify the attribute, and see that
 ultimately the plugin does not support it. I'd be fine if this was an
 accident, but doing this by design is a bit evil. So, I'd suggest the
 following, in order to keep the features in Kilo:

 Patches [3, 4] did introduce config flags to control the plugin behavior,
 but it looks like they were not applied correctly; for instance, the
 vlan_transparent case was only applied to ML2. Similarly the MTU config flag
 was not processed server side to ensure that plugins that do not support
 advertisement do not fail silently. This needs to be rectified.
 As for VLAN transparency, we'd need to implement work item 5 (of 6) of spec
 [2], as this extension without at least a backend able to let tagged traffic
 pass doesn't seem right.
 Ensure we sort out the API tests so that we know how the features behave.

 Now granted that controlling the API via config flags is not the best
 solution, as this was always handled through the extension mechanism, but
 since we've been talking about moving away from extension attributes with
 [5], it does sound like a reasonable stop-gap solution.

 Thoughts?
 Armando

 [1]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
 [2]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
 [3]
 https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/mtu-selection-and-advertisement,n,z
 [4]
 https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/nfv-vlan-trunks,n,z
 [5] https://review.openstack.org/#/c/136760/

 On 19 March 2015 at 14:56, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 19 March 2015 at 11:44, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 Just the fact that we did this does not make it right. But I guess that
 we
 are starting to bend the rules. I think that we really need to be far
 more
 diligent about this kind of stuff. Having said that we decided the
 following on IRC:
 1. Mtu will be left in the core (all 

Re: [openstack-dev] [Fuel] development tools

2015-03-20 Thread Przemyslaw Kaminski
It is something different from what I see.

The repos could be called fuel-dev-utils and fuel-vagrant-dev.

P.

On 03/19/2015 09:43 PM, Andrew Woodward wrote:
 we already have a package with the name fuel-utils please see [1]. I
 -1'd the CR over it.
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2015-March/059206.html
 
 On Thu, Mar 19, 2015 at 7:11 AM, Alexander Kislitsky
 akislit...@mirantis.com wrote:
 +1 for moving fuel_development into separate repo.

 On Thu, Mar 19, 2015 at 5:02 PM, Evgeniy L e...@mirantis.com wrote:

 Hi folks,

 I agree, lets create separate repo with its own cores and remove
 fuel_development from fuel-web.

 But in this case I'm not sure if we should merge the patch which
 has links to non-stackforge repositories, because location is going
 to be changed soon.

 Also it will be cool to publish it on pypi.

 Thanks,

 On Thu, Mar 19, 2015 at 4:21 PM, Sebastian Kalinowski
 skalinow...@mirantis.com wrote:

 As I wrote in the review already: I like the idea of merging
 those two tools and making a separate repository. After that
 we could make them more visible in our documentation and wiki
 so they could benefit from being used by a broader audience.

 Same for the vagrant configuration - if it's useful (and it is,
 since newcomers are using it) we could at least move it under
 the Mirantis organization on GitHub.

 Best,
 Sebastian


 2015-03-19 13:49 GMT+01:00 Przemyslaw Kaminski pkamin...@mirantis.com:

 Hello,

 Some time ago I wrote some small tools that make Fuel development easier,
 and it was suggested to add info about them to the documentation --
 here's the review link [1].

 Evgeniy L correctly pointed out that we already have something like
 fuel_development in fuel-web. I think though that we shouldn't mix such
 stuff directly into fuel-web; I mean, we recently migrated the CLI to a
 separate repo to make fuel-web thinner.

 So a suggestion -- maybe make these tools more official and create
 stackforge repos for them? I think the dev ecosystem could benefit from
 having some standard way of dealing with the ISO (for example, we get
 questions from people on how to apply a new openstack.yaml config to the
 DB).

 At the same time we could get rid of fuel_development and merge that
 into the new repos (it has the useful 'revert' functionality that I
 didn't think of :))

 P.

 [1] https://review.openstack.org/#/c/140355/9/docs/develop/env.rst













 
 
 



Re: [openstack-dev] [Neutron] VLAN transparency support

2015-03-20 Thread Gary Kotton
Hi,
So at the moment we have something that is half-baked. Take the MTU
support as an example: there is a configuration flag ‘advertise_mtu’ (the
default value is False) – this is set by an admin, but a tenant can define
the MTU setting when creating a network.
So by default the tenant settings are ignored.

So I suggest the following:
1. https://github.com/openstack/neutron/blob/master/neutron/api/v2/attributes.py#L697

  *   We do a ‘convert_to’: convert_to_int (if someone passes any other type
here it will break dnsmasq)
  *   We add another validation that checks against the configuration. It
should throw an exception if the tenant has set an MTU and the admin has not
set the advertise_mtu flag
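
A minimal sketch of the two checks suggested above; the names
(validate_network_mtu, MtuValidationError, advertise_mtu as a plain
argument) are hypothetical stand-ins, not Neutron's actual symbols:

```python
# Hypothetical server-side validation combining the convert_to_int step
# with a check against the admin's advertise_mtu configuration.

class MtuValidationError(Exception):
    """Raised when a tenant-supplied MTU cannot be honored."""

def validate_network_mtu(requested_mtu, advertise_mtu, physical_mtu=1500):
    """Return the MTU as an int, or raise if the request is inconsistent."""
    if requested_mtu in (None, ''):
        return None  # tenant did not ask for an MTU
    try:
        mtu = int(requested_mtu)  # the 'convert_to': convert_to_int step
    except (TypeError, ValueError):
        raise MtuValidationError(
            "MTU must be an integer, got %r" % (requested_mtu,))
    if not advertise_mtu:
        # Tenant set an MTU but the admin never enabled advertise_mtu:
        # fail loudly instead of silently ignoring the request.
        raise MtuValidationError("MTU advertisement is not enabled")
    if mtu > physical_mtu:
        raise MtuValidationError(
            "requested MTU %d exceeds network MTU %d" % (mtu, physical_mtu))
    return mtu
```

The point of the explicit exception is exactly the "no silent failure"
principle from Armando's mail: the tenant either gets the MTU or an error.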

We can take a similar approach to ‘vlan_transparent’, but I have no idea
what that actually means as part of the API. I am really not in favor of
this even being in core.

Thanks
Gary

From: Armando M. arma...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Friday, March 20, 2015 at 12:33 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] VLAN transparency support

If my memory does not fail me, changes to the API (new resources, new resource 
attributes or new operations allowed to resources) have always been done 
according to these criteria:

  *   an opt-in approach: this means we know the expected behavior of the 
plugin as someone has coded the plugin in such a way that the API change is 
supported;
  *   an opt-out approach: if the API change does not require explicit backend 
support, and hence can be deemed supported by all plugins.
  *   a 'core' extension (ones available in neutron/extensions) should be 
implemented at least by the reference implementation;

Now, there might have been examples in the past where criteria were not met, 
but these should be seen as exceptions rather than the rule, and as such, fixed 
as defects so that an attribute/resource/operation that is accidentally exposed 
to a plugin will either be honored as expected or an appropriate failure is 
propagated to the user. Bottom line, the server must avoid to fail silently, 
because failing silently is bad for the user.

Now both features [1] and [2] violated the opt-in criterion above: they 
introduced resource attributes in the core models, forcing an undetermined 
behavior on plugins.

I think that keeping [3,4] as is can lead to a poor user experience; IMO it's 
unacceptable to let a user specify the attribute, and see that ultimately the 
plugin does not support it. I'd be fine if this was an accident, but doing this 
by design is a bit evil. So, I'd suggest the following, in order to keep the 
features in Kilo:

 *   Patches [3, 4] did introduce config flags to control the plugin 
behavior, but it looks like they were not applied correctly; for instance, the 
vlan_transparent case was only applied to ML2. Similarly the MTU config flag 
was not processed server side to ensure that plugins that do not support 
advertisement do not fail silently. This needs to be rectified.
 *   As for VLAN transparency, we'd need to implement work item 5 (of 6) of 
spec [2], as this extension without at least a backend able to let tagged 
traffic pass doesn't seem right.
 *   Ensure we sort out the API tests so that we know how the features 
behave.

Now granted that controlling the API via config flags is not the best solution, 
as this was always handled through the extension mechanism, but since we've 
been talking about moving away from extension attributes with [5], it does 
sound like a reasonable stop-gap solution.

Thoughts?
Armando

[1] 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
[2] 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
[3] 
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/mtu-selection-and-advertisement,n,z
[4] 
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/nfv-vlan-trunks,n,z
[5] https://review.openstack.org/#/c/136760/

On 19 March 2015 at 12:01, Gary Kotton gkot...@vmware.com wrote:
With regards to the MTU, can you please point me to where we validate that
the MTU defined by the tenant is actually <= the supported MTU on the
network. I did not see this in the code (maybe I missed something).


From: Ian Wells ijw.ubu...@cack.org.uk
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Thursday, March 19, 2015 at 8:44 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] VLAN transparency support

Per the other discussion on attributes, I believe the change walks 

Re: [openstack-dev] [designate] Designate performance issues

2015-03-20 Thread stanzgy
Hi Graham, thanks for your suggestion. But in fact the initial import was a
simple while-curl script with no concurrency.
With this script, a request is not sent until the previous one gets a
response from designate-api. So I think it's not the rate of the initial
import but the number of records that matters.

I have filed this bug with detailed error logs here:
https://bugs.launchpad.net/designate/+bug/1434479
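
For what it's worth, the rate limiting Graham suggested can be sketched
generically like this; send() and the records list are hypothetical
stand-ins, nothing here is Designate-specific:

```python
# Throttle an iterable so items are yielded no faster than per_second.
import time

def rate_limited(items, per_second):
    """Yield items, sleeping as needed to cap the rate at per_second."""
    interval = 1.0 / per_second
    last = None
    for item in items:
        if last is not None:
            wait = interval - (time.monotonic() - last)
            if wait > 0:
                time.sleep(wait)
        last = time.monotonic()
        yield item

# Usage sketch: throttle a bulk import to, say, 50 requests/second:
#   for record in rate_limited(records, per_second=50):
#       send(record)   # hypothetical helper that POSTs one record
```

This caps the write rate regardless of whether the client is serial or
concurrent, which helps separate "request rate" effects from "total record
count" effects when reproducing the bug.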

On Thu, Mar 19, 2015 at 9:59 PM, Hayes, Graham graham.ha...@hp.com wrote:


 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 03/19/2015 07:41 AM, stanzgy wrote:
  Hi all. I have setup kilo designate services with powerdns backend and
 mysql innodb storage in a
 single node.
  The services function well at first. However, after inserting 13k A
 records via API  within 3 domains (5k, 5k, 3k for each), the service
 stops working.
 
  designate-api returns 500 and many RPCs timeout
  designate-central takes 100% cpu, seems to be trying hard to update
 domains but failing
  designate-mdns also takes 100% cpu, flooded with 'Including all
 tenants items in query results' logs
  powerdns gets timeout errors during AXFR zones
 
  The server doesn't seem to get any better after suffering in this
 state for hours. What I could do to recover the service was to clean up
 the databases and restart the service.
 
  My question is:
  1. Is it not recommended to create too many records in a single domain?
  2. Any suggestions to improve this situation?
 
  --
  Best Regards,
 
  Zhang Gengyuan
 http://www.hp.com/

  First off, for that initial burst of activity, I would disable debug
 level logging.

 How did you try and add them? Was it via calling the API as fast as
 possible until it fell over?
 I would recommend rate limiting the initial import if it was.

 Do you have any logs from the services?

 Graham








-- 
Best Regards,

Zhang Gengyuan


Re: [openstack-dev] [murano] PTL elections

2015-03-20 Thread Sergey Lukjanov
Hi John,

Murano isn't an official project, so we started the election process
earlier; you can see the dates in the first email in this thread. There was
only one candidate, so the voting itself was bypassed.

till 05:59 UTC March 17, 2015: Open candidacy to PTL positions
March 17, 2015 - 1300 UTC March 24, 2015: PTL elections

The link [1] was an example of how it was run a year ago (April 2014);
I probably used bad wording :(

The other link in my initial mail specifies the time frame for the current
Murano PTL election:

https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty

Thanks.

On Fri, Mar 20, 2015 at 7:01 AM, John Griffith john.griffi...@gmail.com
wrote:



 On Wed, Mar 18, 2015 at 6:59 AM, Serg Melikyan smelik...@mirantis.com
 wrote:

 Thank you!

 On Wed, Mar 18, 2015 at 8:28 AM, Sergey Lukjanov slukja...@mirantis.com
 wrote:

 The PTL candidacy proposal time frame ended and we have only one
 candidate.

 So, Serg Melikyan, my congratulations!

 Results documented in
 https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty#PTL

 On Wed, Mar 11, 2015 at 2:04 AM, Sergey Lukjanov slukja...@mirantis.com
  wrote:

 Hi folks,

 due to the requirement to have officially elected PTL, we're running
 elections for the Murano PTL for Kilo and Liberty cycles. Schedule
 and policies are fully aligned with official OpenStack PTLs elections.

 You can find more info in official elections wiki page [0] and the same
 page for Murano elections [1], additionally some more info in the past
 official nominations opening email [2].

 Timeline:

 till 05:59 UTC March 17, 2015: Open candidacy to PTL positions
 March 17, 2015 - 1300 UTC March 24, 2015: PTL elections

 To announce your candidacy please start a new openstack-dev at
 lists.openstack.org mailing list thread with the following subject:
 [murano] PTL Candidacy.

 [0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
 [1] https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html

 Thank you.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.






 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836


 ​
 Certainly not disputing/challenging this, but I'm slightly confused; isn't
 the proposal deadline April 4? You referenced it yourself in the link
 here: [1]. Or is there some special process unique to Murano?

 [1] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014






-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


[openstack-dev] [Neutron] One confirm about max fixed ips per port

2015-03-20 Thread Zou, Yun
Hello, Oleg Bondarev.

I could not find any benefit to having multiple subnets on one network, except
the following one:
 - Migrating from IPv4 to IPv6, where both subnet ranges are needed on one network.
So I am not sure why the max_fixed_ips_per_port parameter is necessary.
All I know is that only the DB module and the OpenContrail plugin use this
parameter for validation.
Are there any other use cases for it?
I would appreciate your help.

My question is related to fix [1].
[1]: https://review.openstack.org/#/c/160214/

Best regards,
Watanabe.isao




[openstack-dev] [Ceilometer] Gnocchi 1.0.0a1 released

2015-03-20 Thread Julien Danjou
Hi there,

Gnocchi 1.0.0a1 is out! You can check it out at:

  https://launchpad.net/gnocchi/+milestone/1.0.0a1
https://pypi.python.org/pypi/gnocchi

Happy hacking,

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




Re: [openstack-dev] [TripleO][Tuskar] Common library for shared code

2015-03-20 Thread Jan Provazník

On 03/18/2015 04:22 PM, Ben Nemec wrote:

On 03/17/2015 09:13 AM, Zane Bitter wrote:

On 16/03/15 16:38, Ben Nemec wrote:

On 03/13/2015 05:53 AM, Jan Provaznik wrote:

On 03/10/2015 05:53 PM, James Slagle wrote:

On Mon, Mar 9, 2015 at 4:35 PM, Jan Provazník jprov...@redhat.com wrote:

Hi,
it would make sense to have a library for the code shared by Tuskar UI and
CLI (I mean TripleO CLI - whatever it will be, not tuskarclient, which is
just a thin wrapper for the Tuskar API). There are various actions which
consist of more than a single API call to an OpenStack service, to give
some examples:

- nodes registration - for loading a list of nodes from a user defined file,
this means parsing a CSV file and then feeding Ironic with this data
- decommission a resource node - this might consist of disabling
monitoring/health checks on this node, then gracefully shut down the node
- stack breakpoints - setting breakpoints will allow manual
inspection/validation of changes during stack-update, user can then update
nodes one-by-one and trigger rollback if needed


I agree something is needed. In addition to the items above, much of it is
the post-deployment steps from devtest_overcloud.sh. I'd like to see those be
consumable from the UI and CLI.

I think we should be aware though that where it makes sense to add things
to os-cloud-config directly, we should just do that.



Yes, actually I think most of the devtest_overcloud content fits
os-cloud-config (and IIRC for this purpose os-cloud-config was created).



It would be nice to have a place (library) where the code could live and
where it could be shared both by web UI and CLI. We already have
os-cloud-config [1] library which focuses on configuring OS cloud after
first installation only (setting endpoints, certificates, flavors...) so not
all shared code fits here. It would make sense to create a new library where
this code could live. This lib could be placed on Stackforge for now and it
might have very similar structure as os-cloud-config.

And most important... what is the best name? Some of ideas were:
- tuskar-common


I agree with Dougal here, -1 on this.


- tripleo-common
- os-cloud-management - I like this one, it's consistent with the
os-cloud-config naming


I'm more or less happy with any of those.

However, if we wanted something to match the os-*-config pattern we
could go with:
- os-management-config
- os-deployment-config



Well, the scope of this lib will go beyond configuration of a cloud, so
having -config in the name is not ideal. Based on feedback in this
thread I tend to go ahead with os-cloud-management and, unless someone
raises an objection here now, I'll ask the infra team what the process is for
adding the lib to Stackforge.


Any particular reason you want to start on stackforge?  If we're going
to be consuming this in TripleO (and it's basically going to be
functionality graduating from incubator) I'd rather just have it in the
openstack namespace.  The overhead of some day having to rename this
project seems unnecessary in this case.


I think the long-term hope for this code is for it to move behind the
Tuskar API, so at this stage the library is mostly to bootstrap that
development to the point where the API is more or less settled. In that
sense stackforge seems like a natural fit, but if folks feel strongly
that it should be part of TripleO (i.e. in the openstack namespace) from
the beginning then there's probably nothing wrong with that either.


So is this eventually going to live in Tuskar?  If so, I would point out
that it's going to be awkward to move it there if it starts out as a
separate thing.  There's no good way I know of to copy code from one git
repo to another without losing its history.

I guess my main thing is that everyone seems to agree we need to do
this, so it's not like we're testing the viability of a new project.
I'd rather put this code in the right place up front than have to mess
around with moving it later.  That said, this is kind of outside my
purview so I don't want to hold things up, I just want to make sure
we've given some thought to where it lives.

-Ben



Hi,
I don't have a strong opinion where this lib should live. James, as 
TripleO PTL, what is your opinion about the lib location?


For now, I set WIP on the patch which adds this lib into Stackforge [1] 
(which I sent shortly before Ben pointed out the concern about its 
location).


Jan

[1] https://review.openstack.org/#/c/165433/



Re: [openstack-dev] [cinder] All Cinder Volume Drivers Must Have A Third Party CI by March 19, 2015

2015-03-20 Thread Erlon Cruz
Mike, for HDS and Hitachi, the contact person is the same:
openstackdevelopm...@hds.com. Also, we have 5 drivers:

- HDS HNAS NFS
- HDS HNAS iSCSI
- HBSD FC
- HBSD iSCSI
- HDS HUS

This last one, HDS HUS, is deprecated in favor of the HBSD drivers and won't be
maintained, so you can add it to the removal list.

On Thu, Mar 19, 2015 at 1:36 PM, Mike Perez thin...@gmail.com wrote:

 On Wed, Mar 18, 2015 at 11:38 PM, Bharat Kumar
 bharat.kobag...@redhat.com wrote:
  Regarding the GlusterFS CI:
 
  As I am dealing with end to end CI process of GlusterFS, please modify
 the
  contact person to bharat.kobag...@redhat.com.
 
  Because of this I may miss important announcements from you regarding the
  CI.

 Done.

 --
 Mike Perez



Re: [openstack-dev] [cinder] May you reconsider about huawei driver?

2015-03-20 Thread Erlon Cruz
I agree with John (comment on the mentioned patchset). Also, I expected a
single change removing all drivers that do not report on CI. There's one of ours
(HUSDriver) that is not maintained and should also be deprecated/removed.

Liu, the best argument here would be a report from your CI in this patch
saying it fails.

On Thu, Mar 19, 2015 at 11:13 PM, liuxinguo liuxin...@huawei.com wrote:

  Hi Mike,



 I have seen the patch at https://review.openstack.org/#/c/165990/ saying
 that huawei driver will be removed because “the maintainer does not have a
 CI reporting to ensure their driver integration is successful”.



 But in fact we have really had a CI for months and it is reporting to
 reviews; the most recent posts are:



 *https://review.openstack.org/#/c/165796/

 Post time: 2015-3-19 0:14:56



 *https://review.openstack.org/#/c/164697/

 Post time: 2015-3-18 23:55:37



 *https://review.openstack.org/164702/

 Post time: 2015-3-18 23:55:37



 *https://review.openstack.org/#/c/152401/

 Post time: 3-18 23:08:45



 And if you want, I will give you more proof of reviews.



 Thanks and regards,

 Liu



[openstack-dev] openstack storage configuration file (cinder-service)

2015-03-20 Thread Kamsali, RaghavendraChari (Artesyn)
Hi,

Can anyone help with how to configure the local.conf file for a storage node 
running the Cinder service in DevStack?
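For reference, a minimal local.conf sketch for a dedicated Cinder volume node in a multi-node DevStack setup might look like the following. The controller address and passwords are placeholders, and the exact service list depends on your deployment:

```ini
[[local|localrc]]
# Controller this storage node talks to (placeholder address)
SERVICE_HOST=192.168.0.10
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292

ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Run only the cinder volume service on this node
ENABLED_SERVICES=c-vol
VOLUME_BACKING_FILE_SIZE=10250M
```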



Thanks and Regards,
Raghavendrachari kamsali | Software Engineer II  | Embedded Computing
Artesyn Embedded Technologies | 5th Floor, Capella Block, The V, Madhapur| 
Hyderabad, AP 500081 India
T +91-40-66747059 | M +919705762153



Re: [openstack-dev] [Neutron] IPAM reference driver status and other stuff

2015-03-20 Thread Salvatore Orlando
As pointed out by Pavel in yesterday's meeting, the refactor [1] cannot
assume that the DB transaction for IPAM operations will occur in a scope
different from the one for performing the API operation.
This is because most plugins, including ML2, use inheritance to perform
further operations on database objects, and include the call to super
within a database transaction. These operations might include, for
instance, processing extensions or setting up mappings with backend
resources.

As the IPAM driver is called in NeutronDbPluginV2, this call happens while
another transaction is typically in progress. Initiating a separate session
within the IPAM driver causes the outer transaction to fail.
I do not think there is a lot we can do about this at the moment, unless we
agree on a more pervasive refactoring.
In that case we should:
1) Assume that IPAM enabled methods in NeutronDbPluginV2 get IPAM data from
the caller (the actual plugin class in this case)
This will ensure proper transaction scoping. Perhaps it will also make
the refactoring of NeutronDbPluginV2 slightly cleaner, but it will require
explicit support for pluggable IPAM from the plugin.
Also, this will imply that, unless we do some really hackish things,
the ability to retry on MAC collisions will be lost (that does not
really work with REPEATABLE READ anyway).
2) IPAM-enable the ML2 plugin (at least). This means that IPAM calls for
allocating subnets and IPs should happen before the actual DB transaction
for creating ports or subnet is performed (in the case of a subnet we'd
also have a post-commit action for associating the IPAM subnet with a
neutron ID for future retrieval)  I am sure we will get asked to do this as
a part of the mechanism driver framework - but I'm not yet sure about how
to do that. For the time being an explicit call to the IPAM driver might be
acceptable, hopefully.

Otherwise, we'd just make the IPAM driver session-aware. This implies
changes to the Pool and Subnet interfaces to accept an optional session
parameter.
The above works and is probably quicker from an implementation perspective.
However, doing so somehow represents a failure of the pluggable IPAM effort
as total separation between IPAM and API operation processing was one of
our goals. Also, for drivers with a remote backend, remote calls will be
made within a DB transaction, which is another thing we wanted to avoid.
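The session-aware option can be sketched roughly as below, with a toy schema and a hypothetical driver class standing in for Neutron's actual code. The point is that when the caller passes its own session, the IPAM write joins the outer API transaction instead of opening a new one:

```python
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker

# Toy schema standing in for Neutron's IP allocation table.
engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
allocations = sa.Table(
    "allocations", meta,
    sa.Column("ip", sa.String, primary_key=True),
    sa.Column("subnet_id", sa.String),
)
meta.create_all(engine)
Session = sessionmaker(bind=engine)

class DbIpamDriver:
    """Hypothetical IPAM driver that can join a caller-supplied session."""

    def allocate_ip(self, subnet_id, ip, session=None):
        owns_txn = session is None
        session = session or Session()
        session.execute(allocations.insert().values(ip=ip, subnet_id=subnet_id))
        if owns_txn:
            # Standalone use: the driver manages its own transaction.
            session.commit()
        # Otherwise the outer (plugin) transaction commits or rolls back,
        # so the IPAM write shares the fate of the API operation.

# Plugin-side usage: one transaction covers both the port and the IPAM write.
outer = Session()
DbIpamDriver().allocate_ip("subnet-1", "10.0.0.5", session=outer)
outer.commit()
print(outer.execute(allocations.select()).fetchone().ip)  # 10.0.0.5
```

This keeps a single transaction scope, at the cost of coupling the driver to the caller's DB session, which is exactly the separation the pluggable-IPAM effort wanted to avoid.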

And finally, there is the third option. I know IPAM contributors do not
even want to hear it... but the third option is to enjoy 6 more months to
come up with a better implementation which does not add any technical debt.
In Kilo from the IPAM side we're already introducing subnet pools, so it
won't be a total failure!

Salvatore

[1] https://review.openstack.org/#/c/153236/

On 17 March 2015 at 15:32, Salvatore Orlando sorla...@nicira.com wrote:



 On 17 March 2015 at 14:44, Carl Baldwin c...@ecbaldwin.net wrote:


 On Mar 15, 2015 6:42 PM, Salvatore Orlando
  * the ML2 plugin overrides several methods from the base db class.
 From what I gather from unit tests results, we have not yet refactored it.
 I think to provide users something usable in Kilo we should ensure the ML2
 plugin at least works with the IPAM driver.

 Yes, agreed.

  * the current refactoring has ipam-driver-enabled and
 non-ipam-driver-enabled version of some API operations. While this the less
 ugly way to introduce the driver and keeping at the same time the old
 logic, it adds quite a bit of code duplication. I wonder if there is any
 effort we can make without too much yak shaving to reduce that code
 duplication, because in this conditions I suspect it would a hard sell to
 the Neutron core team.

 This is a good thing to bring up.  It is a difficult trade off.  On one
 hand, the way it has been done makes it easy to review and see that the
 existing implementation has not been disturbed reducing the short term
 risk.  On the other hand, if left the way it is indefinitely, it will be a
 maintenance burden.  Given the current timing, could we take a two-phased
 approach?  First, merge it with duplication and immediately create a follow
 on patch to deduplicate the code to merge when that is ready?

 The problem with duplication is that it will make maintenance troubling.
 For instance if a bug is found in _test_fixed_ips the bug fixer will have
 to know that the same fix must be applied to _test_fixed_ips_for_ipam as
 well. I'm not sure we can ask contributors to fix bugs in two places. But
 if we plan to deduplicate with a follow-up patch I am on board. I know we'd
 have the cycles for that.
 That said, the decision lies with the rest of the core team (Carl's and my
 votes do not count here!). If I were a reviewer I'd evaluate the tradeoff
 between the benefits brought by this new feature, the risks of the
 refactoring (which, as you say, are rather low), and the maintenance burden
 (aka technical debt) it introduces.

 I'm kind of sure the PTL would 

Re: [openstack-dev] [designate] Designate performance issues

2015-03-20 Thread stanzgy
Hi Vinod, thanks for your reply. I have reported a bug with related log
snippets here:

https://bugs.launchpad.net/designate/+bug/1434479

On Thu, Mar 19, 2015 at 10:11 PM, Vinod Mangalpally 
vinod.m...@rackspace.com wrote:

  Hi Zhang,

  Thank you for reporting the bug. The number of records does not seem too
 high. At this point I do not have a suggestion to improve the situation,
 but I will investigate this. Could you file a bug report? Relevant log
 snippets would also be helpful.

  --vinod

   From: stanzgy stan@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, March 19, 2015 2:39 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [designate] Designate performance issues

   Hi all. I have set up Kilo designate services with a powerdns backend
 and MySQL InnoDB storage on a single node.
  The services function well at first. However, after inserting 13k A
 records via the API within 3 domains (5k, 5k, and 3k each), the service stops
 working.

 designate-api returns 500 and many RPCs time out
  designate-central takes 100% CPU, apparently trying hard to update domains but
 failing
  designate-mdns also takes 100% CPU, flooded with "Including all tenants
 items in query results" logs
  powerdns gets timeout errors during AXFR of zones

  The service doesn't seem to get any better after sitting in this state
 for hours. The only way I could recover the service was to clean up the
 databases and restart the services.

  My question is:
  1. Is it not recommended to create too many records in a single domain?
  2. Any suggestions to improve this situation?

  --
   Best Regards,

 Zhang Gengyuan





-- 
Best Regards,

Zhang Gengyuan


Re: [openstack-dev] [Neutron] Neutron extenstions

2015-03-20 Thread Ian Wells
On 20 March 2015 at 15:49, Salvatore Orlando sorla...@nicira.com wrote:

 The MTU issue has been a long-standing problem for neutron users. What
 this extension is doing is simply, in my opinion, enabling API control over
 an aspect users were dealing with previously through custom made scripts.


Actually, version 1 is not even doing that; it's simply telling the user
what happened, which the user has never previously been able to tell, and
configuring the network consistently.  I don't think we implemented the
'choose an MTU' API, we're simply telling you the MTU you got.

Since this is frequently smaller than you think (there are some
non-standard features that mean you frequently *can* pass larger packets
than should really work, hiding the problem at the cost of a performance
penalty for doing it) and there was previously no way of getting any idea
of what it is, this is a big step forward.

And to reiterate, because this point is often missed: different networks in
Neutron have different MTUs.  My virtual networks might be 1450.  My
external network might be 1500.  The provider network to my NFS server
might be 9000.  There is *nothing* in today's Neutron that lets you do
anything about that, and - since Neutron routers and Neutron DHCP agents
have no means of dealing with different MTU networks - really strange
things happen if you try some sort of workaround.
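As an illustration of why tenant and provider networks end up with different MTUs, here is a rough sketch using typical encapsulation overheads. The byte counts are common defaults for these tunnel types, not values read from any Neutron plugin:

```python
# Typical per-packet encapsulation overhead in bytes (illustrative values).
OVERHEAD = {"flat": 0, "vlan": 0, "gre": 42, "vxlan": 50}

def effective_mtu(physical_mtu, network_type):
    """MTU usable by instances on a network of the given type."""
    return physical_mtu - OVERHEAD.get(network_type, 0)

for net_type in ("vxlan", "flat"):
    print(net_type, effective_mtu(1500, net_type))
# vxlan 1450  <- the "virtual networks might be 1450" case
# flat 1500
```

A VXLAN tenant network on a 1500-byte physical fabric leaves only 1450 bytes for the guest, while a flat external network keeps the full 1500 - exactly the mismatch described above.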

If a plugin does not support specifically setting the MTU parameter, I
 would raise a 500 NotImplemented error. This will probably create a
 precedent, but as I also stated in the past, I tend to believe this might
 actually be better than the hide & seek game we do with extensions.


I am totally happy with this, if we agree it's what we want to do, and it
makes plenty of sense for when you request an MTU.

The other half of the interface is when you don't request a specific MTU
but you'd like to know what MTU you got - the approach we have today is
that if the MTU can't be determined (either a plugin with no support or one
that's short on information) then the value on the network object is
unset.  I assume people are OK with that.


 The vlan_transparent feature serves a specific purpose of a class of
 applications - NFV apps.


To be pedantic - the uses for it are few and far between, but I wouldn't
reduce it to 'NFV apps'.  I wrote http://virl.cisco.com/ on OpenStack a
couple of years ago and it's network simulation but not actually NFV.
People implementing resold services (...aaS) in VMs would quite like VLANs
on their virtual networks too, and this has been discussed in at least 3
summits so far.  I'm sure other people can come up with creative reasons.

It has been speculated during the review process whether this was actually
 a provider network attribute.


Which it isn't, just for reference.


 In theory it is something that characterises how the network should be
 implemented in the backend.

However it was not possible to make this an admin attribute, because
 non-admins might also require a vlan_transparent network. Proper RBAC might
 allow us to expose this attribute only to a specific class of users, but
 Neutron does not yet have RBAC [1]


I think it's a little early to worry about restricting the flag.  The
default implementation pretty much returns a constant (and says if that
constant is true when you'd like it to be) - it's implemented as a call for
future expansion.

Because of its nature vlan_transparent is an attribute that probably
 several plugins will not be able to understand.


And again backward compatibility is documented, and actually pretty
horrible now I come to reread it, so if we wanted to go with a 500 as above
that's quite reasonable.


 Regardless of what the community decides regarding extensions vs.
 non-extensions, the code as it is implies that this flag is present in every
 request - defaulting to False.


Which is, in fact, not correct (or at least not the way it's supposed to
be, anyway; I need to check the code).

The original idea was that if it's not present in the request then you
can't assume the network you're returned is a VLAN trunk, but you also
can't assume it isn't - as in, it's the same as the current behaviour,
where the plugin does what it does and you get to put up with the results.
The difference is that the plugin now gets to tell you what it's done.


 This can lead to a somewhat confusing situation, because users can set it to
 True, and get a 200 response. As a user, I would think that Neutron has
 prepared for me a nice network which is vlan transparent... but if Neutron
 is running any plugin which does not support this extension I would be in
 for a huge disappointment when I discover my network is not vlan
 transparent at all!


The spec has detail on how the user works this out, as I say.
Unfortunately it's not by return code.

I reckon that perhaps, as a short term measure, the configuration flag
 Armando mentioned might be used to obscure completely the API attribute 

Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Tidwell, Ryan
Great suggestion Kevin.  Passing 0.0.0.1 as gateway_ip_template (or whatever 
you call it) is essentially passing an address index, so when you OR 0.0.0.1 
with the CIDR you get your gateway set as the first usable IP in the subnet.  
The intent of the user is to allocate the first usable IP address in the subnet 
to the gateway.  The wildcard notation for gateway IP is really a more 
convoluted way of expressing this intent.  Something like address_index is a 
little more explicit in my mind.  I think Kevin is on to something.

-Ryan
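The address_index idea above can be sketched with Python's stdlib ipaddress module. Here gateway_from_index is a hypothetical helper, not a proposed Neutron API:

```python
import ipaddress

def gateway_from_index(cidr, address_index):
    """Return network address + offset as the gateway (hypothetical helper)."""
    net = ipaddress.ip_network(cidr)
    if not 0 < address_index < net.num_addresses - 1:
        raise ValueError("address_index outside the usable host range")
    return net.network_address + address_index

# First usable address in whatever CIDR the pool hands out:
print(gateway_from_index("10.5.12.0/24", 1))  # 10.5.12.1
```

Passing address_index=1 is equivalent to OR-ing 0.0.0.1 with the allocated CIDR, but makes the "first usable IP" intent explicit.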

From: Kevin Benton [mailto:blak...@gmail.com]
Sent: Friday, March 20, 2015 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [api][neutron] Best API for generating subnets 
from pool

What if we just call it 'address_index' and make it an integer representing the 
offset from the network start address?

On Fri, Mar 20, 2015 at 12:39 PM, Carl Baldwin 
c...@ecbaldwin.netmailto:c...@ecbaldwin.net wrote:
On Fri, Mar 20, 2015 at 1:34 PM, Jay Pipes 
jaypi...@gmail.commailto:jaypi...@gmail.com wrote:
 How is 0.0.0.1 a host address? That isn't a valid IP address, AFAIK.

It isn't a valid *IP* address without the network part.  However, it
can be referred to as the host address on the network or the host
part of the IP address.

Carl




--
Kevin Benton


[openstack-dev] [TripleO] CI outage

2015-03-20 Thread Dan Prince
Short version:

The RH1 CI region has been down since yesterday afternoon.

We have a misbehaving switch and have filed a support ticket with the
vendor to troubleshoot things further. We hope to know more this
weekend, or Monday at the latest.

Long version:

Yesterday afternoon we started seeing issues in scheduling jobs on the
RH1 CI cloud. We haven't made any OpenStack configuration changes
recently, and things have been quite stable for some time now (our
uptime was 365 days on the controller).

Initially we found a misconfigured Keystone URL which was preventing
some diagnostic queries via OS clients external to the rack. This
setting hadn't been recently changed however and didn't seem to bother
nodepool before so I don't think it is the cause of the outage...

MySQL also got a bounce. It seemed happy enough after a restart as well.

After fixing the keystone setting and bouncing MySQL, instances appear
to go ACTIVE, but we were still having connectivity issues getting
floating IPs and DHCP working on overcloud instances. After a good bit
of debugging we started looking at the switches. It turns out one of them
has high CPU usage (above the warning threshold) and MAC addresses are
also unstable (ports are moving around).

Until this is resolved, RH1 is unavailable to host CI jobs. We will
post back here with an update once we have more information.

Dan




Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Salvatore Orlando
If we feel a need for specifying the relative position of the gateway address
and allocation pools when creating a subnet from a pool which will pick a
CIDR from its prefixes, then the integer value solution is probably
marginally better than the fake IP one (e.g. 0.0.0.1 to say the gateway
is the first IP). Technically they're equivalent - and one could claim that
the address-like notation is nothing but an octet-based representation of
a number.

I wonder why a user would ask for a random CIDR with a given prefix, and
then mandate that gateway IP and allocation pools are in precise locations
within this randomly chosen CIDR. I guess there are good reasons I cannot
figure out by myself. In my opinion all that counts here is that the
semantics of a resource attribute should be the same in the request and the
response. For instance, one should not have gateway_ip as a relative
counter-like IP in the request body and then as an actual IP address in
the response object.

Salvatore

On 21 March 2015 at 00:08, Tidwell, Ryan ryan.tidw...@hp.com wrote:

  Great suggestion Kevin.  Passing 0.0.0.1 as gateway_ip_template (or
 whatever you call it) is essentially passing an address index, so when you
 OR 0.0.0.1 with the CIDR you get your gateway set as the first usable IP in
 the subnet.  The intent of the user is to allocate the first usable IP
 address in the subnet to the gateway.  The wildcard notation for gateway IP
 is really a more convoluted way of expressing this intent.  Something like
 address_index is a little more explicit in my mind.  I think Kevin is on to
 something.



 -Ryan



 *From:* Kevin Benton [mailto:blak...@gmail.com]
 *Sent:* Friday, March 20, 2015 2:34 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [api][neutron] Best API for generating
 subnets from pool



 What if we just call it 'address_index' and make it an integer
 representing the offset from the network start address?



 On Fri, Mar 20, 2015 at 12:39 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 On Fri, Mar 20, 2015 at 1:34 PM, Jay Pipes jaypi...@gmail.com wrote:
  How is 0.0.0.1 a host address? That isn't a valid IP address, AFAIK.

 It isn't a valid *IP* address without the network part.  However, it
 can be referred to as the host address on the network or the host
 part of the IP address.

 Carl







 --

 Kevin Benton



[openstack-dev] [Keystone] Liberty Spec Proposals now being accepted

2015-03-20 Thread Morgan Fainberg
Hi everyone,

This is a quick note that Spec proposals are now open for Keystone for the
Liberty cycle. By open, this means that specs can be proposed against
Liberty with the intention to kick-start the review cycle on specs for the
next cycle a bit earlier and help prevent piling all the new features into
the 3rd milestone for the Liberty cycle. The early acceptance doesn't mean
that there will be a lot of review activity on the specs until Kilo is
a bit closer to release.

If any authors have previously approved specs (from Juno or Kilo), feel free
to tag the new proposed spec with 'previously-approved: release' in the
commit message. This will help the review team identify specs that should be
prioritized for early review.

Some general guidelines (the PTL for Liberty may decide to tweak any items
listed here):

Previously approved specs
=

Copy your spec from the 'specs/oldrelease' directory to the
'specs/liberty/' directory and note if anything has been partially
implemented and what is still required for the next cycle.

Reviewers will still do full review of the spec, there will be no
rubberstamping of previously approved specs.

New Proposals
=============

Everything is the same as for the Kilo cycle.

Keystone Middleware and Keystoneclient
======================================

Specifications for middleware and client are handled as they were in the
Kilo cycle. Since middleware and client are not released in the same manner
as the keystone server, they can continue to be proposed and approved
regardless of the Feature-Freeze state.

Thanks!
--Morgan Fainberg

(If this email looks a lot like the nova email from Michael Still, It is
because I shamelessly decided to use a lot of the same concepts/verbiage...
imitation is the highest form of flattery, or so I'm told; I assure you
this email is meant for Keystone. Thanks for an awesome email I could
imitate Michael!).


Re: [openstack-dev] [puppet] Re-schedule Puppet OpenStack meeting

2015-03-20 Thread Colleen Murphy
A couple more responses were submitted but 1500 on Tuesdays is still in
first place. Since there was no further discussion, we'll meet next week at
1500 on Tuesday in #openstack-meeting-4 (no Monday meeting). I've edited
the meetings wiki to indicate this. If you have topics to discuss at the
meeting, please add them to the agenda here:
https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack#Agenda

Colleen (crinkle)

On Mon, Mar 16, 2015 at 8:04 AM, Colleen Murphy coll...@puppetlabs.com
wrote:

 Based on the results of https://doodle.com/wuhsuafq5tibzugg we have two
 contenders, in 1st place is 1500 on Tuesdays and in 2nd is 1600 on
 Thursdays. 1500 Tuesdays has an opening in #openstack-meeting-4 and 1600
 Thursdays has openings in #openstack-meeting and #openstack-meeting-3.

 Since 1500 on Tuesdays has more votes it makes sense to book that time in
 #openstack-meeting-4. If there are objections or concerns please voice them
 here.

 Colleen (crinkle)

 On Thu, Mar 12, 2015 at 1:51 PM, Colleen Murphy coll...@puppetlabs.com
 wrote:



  Forwarded Message 
 Subject: [openstack-dev] [puppet] Re-schedule Puppet OpenStack meeting
 Date: Thu, 12 Mar 2015 15:49:13 -0400
 From: Emilien Macchi emil...@redhat.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org

 Hi everyone,

 Since OPS midcycle discussions, it seems we need to reschedule the
 meeting again.

 The proposed times were based on what the folks at the midcycle said
 seemed reasonable for them, but I understand that there were plenty of us
 who were not there who would still want to participate and have their
 availability considered. Perhaps if there are people for whom none of the
 proposed slots work, they can comment here or fill out the poll with none
 of the boxes checked? Obviously we can't make it work for absolutely
 everyone, but I'm hoping not to exclude too many people.

 Colleen



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Request exemption for removal of NetApp FC drivers (no voting CI)

2015-03-20 Thread liuxinguo
Hi Mike,

I think what we are talking about is huawei-volume-ci, not huawei-ci. It is
huawei-volume-ci that reports on behalf of the Huawei 18000 iSCSI and Huawei
18000 FC drivers.
Regarding only reporting failures when a patch really does break your
integration, I think huawei-volume-ci should probably be marked as not
stable, but not as not reported. And looking at all the other CIs' reports,
I think some of them are really not stable either.
I do not understand why huawei-volume-ci is marked as not reported.

The review.openstack.org server is located in the United States (U.S.) and
there is a real network problem between our CI and the review server.
We are working hard on this and our CI will be moved to a more
stable network soon.

Mike, would you please reconsider this? Thanks very much!

Thanks and best regards,
Liu

-----Original Message-----
From: Mike Perez [mailto:thin...@gmail.com]
Sent: March 21, 2015 6:37
To: OpenStack Development Mailing List (not for usage questions)
Cc: Fanyaohong
Subject: Re: [openstack-dev] [cinder] Request exemption for removal of NetApp FC
drivers (no voting CI)

On 21:53 Fri 20 Mar , Rochelle Grober wrote:
 Ditto for Huawei.  
 
 While we are not *reliably* reporting, we are reporting and the 
 necessary steps have already been taken (and more importantly, 
 approved) to get this reliably working ASAP.
 
 We respectfully request the same consideration for our cinder drivers.

The most important piece of a CI meeting the requirements is that the tests pass
with your storage solution configured in Cinder, and that it only reports failures
when a patch really does break your integration. Otherwise, there is no point.
So far, the times huawei-ci has reported have been false failures [1].

[1] - 
https://review.openstack.org/#/q/reviewer:+huawei-ci+project:openstack/cinder,n,z

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron extensions

2015-03-20 Thread Rob Pothier (rpothier)

The MTU values are derived from the config values only.
If the tenant tries to set the MTU directly, that is rejected.

Rob
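
The rejection Rob describes could be sketched as follows (purely hypothetical
code of mine, not the actual Neutron implementation; the function and parameter
names are made up):

```python
def validate_requested_mtu(requested_mtu, network_max_mtu):
    """Reject a tenant-supplied MTU larger than what the network supports.

    Hypothetical sketch -- not Neutron code.
    """
    if requested_mtu is None:
        return  # nothing requested; the config-derived value applies
    if requested_mtu > network_max_mtu:
        raise ValueError(
            "requested MTU %d exceeds supported network MTU %d"
            % (requested_mtu, network_max_mtu))


validate_requested_mtu(1450, 1500)   # fine
validate_requested_mtu(None, 1500)   # fine: value comes from config only
```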

From: Gary Kotton gkot...@vmware.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, March 19, 2015 at 3:01 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] VLAN transparency support

With regards to the MTU, can you please point me to where we validate that the
MTU defined by the tenant is actually <= the supported MTU on the network. I
did not see this in the code (maybe I missed something).


From: Ian Wells ijw.ubu...@cack.org.uk
Reply-To: OpenStack List
openstack-dev@lists.openstack.org
Date: Thursday, March 19, 2015 at 8:44 PM
To: OpenStack List
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] VLAN transparency support

Per the other discussion on attributes, I believe the change walks in 
historical footsteps and it's a matter of project policy choice.  That aside, 
you raised a couple of other issues on IRC:

- backward compatibility with plugins that haven't adapted their API - this is 
addressed in the spec, which should have been implemented in the patches 
(otherwise I will downvote the patch myself) - behaviour should be as before 
with the additional feature that you can now tell more about what the plugin is 
thinking
- whether they should be core or an extension - this is a more personal 
opinion, but on the grounds that all networks are either trunks or not, and all 
networks have MTUs, I think they do want to be core.  I would like to see 
plugin developers strongly encouraged to consider what they can do on both 
elements, whereas an extension tends to sideline functionality from view so 
that plugin writers don't even know it's there for consideration.

Aside from that, I'd like to emphasise the value of these patches, so hopefully 
we can find a way to get them in in some form in this cycle.  I admit I'm 
interested in them because they make it easier to do NFV.  But they also help 
normal cloud users and operators, who otherwise have to do some really strange 
things [1].  I think it's maybe a little unfair to post reversion patches 
before discussion, particularly when the patch works, passes tests and 
implements an approved spec correctly.
--
Ian.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1138958
(admittedly the first link I found, but there's no shortage of them)

On 19 March 2015 at 05:32, Gary Kotton gkot...@vmware.com wrote:
Hi,
This patch has the same addition too - 
https://review.openstack.org/#/c/154921/. We should also revert that one.
Thanks
Gary

From: Gary Kotton gkot...@vmware.com
Reply-To: OpenStack List
openstack-dev@lists.openstack.org
Date: Thursday, March 19, 2015 at 1:14 PM
To: OpenStack List
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] VLAN transparency support

Hi,
It appears that https://review.openstack.org/#/c/158420/ updates the base
attributes for the networks. Is there any reason why this was not added as a
separate extension like all the others?
I do not think that this is the correct way to go; we should do this the way all
other extensions have been maintained. I have posted a revert
(https://review.openstack.org/#/c/165776/) - please feel free to nack it if it
is invalid.
Thanks
Gary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] FFE backport IBP-reconnect to 6.0

2015-03-20 Thread Alexander Gordeev
Hello Fuelers,

I'm kindly insisting on merging the following 2 patches into 6.0

[1] https://review.openstack.org/#/c/161721/
[2] https://review.openstack.org/#/c/161722/

These patches are going to implement IBP-reconnect [3].
Additionally, it closes a customer-found bug [4] related to that.

Patch [1] introduces a new dependency for fuel-agent, but this
dependency is already present in our repos and the package should
already be installed in our bootstrap image because the network checker
needs it.
Patch [2] just adds a reconnecting routine to fuel-agent itself.

All changes are related to fuel-agent only, which is installed into the
bootstrap image.
Those patches don't affect our patching/upgrades.

I can't see any reason not to merge them. Even though IBP was included
under experimental status in 6.0, we have to support it. This fix is
essential.
I know that we have a strict policy prohibiting the backporting of fixes
for experimental features.
So, I'm asking for an exception for IBP. All related fixes are
pretty small and totally harmless.

Let me know, guys, if you still have objections to merging them.
Otherwise, let's merge them before it's too late in terms of
our release cycle.


[1] https://review.openstack.org/#/c/161721/
[2] https://review.openstack.org/#/c/161722/
[3] https://blueprints.launchpad.net/fuel/+spec/ibp-reconnect
[4] https://bugs.launchpad.net/fuel/+bug/1389120

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] May you reconsider about huawei driver?

2015-03-20 Thread Jay S. Bryant

Mike,

Looks like this removal may have been a mistake.  We should readdress.

Jay


On 03/20/2015 05:59 AM, Erlon Cruz wrote:
I agree with John (comment on the mentioned patch set). Also, I
expected one change removing all drivers that do not report on CI.
There's one of ours (HUSDriver) that is not maintained and also should
be deprecated/removed.


Liu, the best argument here would be a report from your CI on this
patch saying it fails.


On Thu, Mar 19, 2015 at 11:13 PM, liuxinguo liuxin...@huawei.com 
mailto:liuxin...@huawei.com wrote:


Hi Mike,

I have seen the patch at https://review.openstack.org/#/c/165990/
saying that the huawei driver will be removed because "the maintainer
does not have a CI reporting to ensure their driver integration is
successful".

But in fact we have really had a CI for months and it is really
reporting to reviews; the most recent posts are:

*https://review.openstack.org/#/c/165796/

Post time: 2015-3-19 0:14:56

*https://review.openstack.org/#/c/164697/

Post time: 2015-3-18 23:55:37

*https://review.openstack.org/164702/

Post time: 2015-3-18 23:55:37

*https://review.openstack.org/#/c/152401/

Post time: 3-18 23:08:45

And if you want, I will give you more proof of reviews.

Thanks and regards,

Liu


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] canceling meeting

2015-03-20 Thread Susanne Balle
Makes sense to me. Susanne

On Thu, Mar 19, 2015 at 5:49 PM, Doug Wiegley doug...@parksidesoftware.com
wrote:

 Hi lbaas'ers,

 Now that lbaasv2 has shipped, the need for a regular weekly meeting is
 greatly reduced. I propose that we cancel the regular meeting, and discuss
 neutron-y things during the neutron on-demand agenda, and octavia things in
 the already existing octavia meetings.

 Any objections/alternatives?

 Thanks,
 doug



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][horizon] Need help at debugging requirements issue

2015-03-20 Thread Ihar Hrachyshka

On 03/20/2015 09:01 AM, Matthias Runge wrote:
 On 19/03/15 15:52, Ihar Hrachyshka wrote:
 
 [1] https://review.openstack.org/#/c/155353/
 
 
 Hi,
 
 it all comes to the fact that DEVSTACK_GATE_INSTALL_TESTONLY=1 is
 not specified in the requirements integration job. I think you
 need to set it at [1]. In that case, your test requirements will
 also be installed during the job.
 
 Thanks Ihar,
 
 the issue is, the only change was to remove the upper boundary
 from

 Django>=1.4.2,<1.7

 And removing <1.7 from that line resulted in django-nose not being
 installed any more.
 

The code in the new Django version may behave differently, requiring the
nose plugin to be installed while the old version didn't. If that's the
case, the real problem is probably that Django does not require the
plugin in its requirements.txt. That said, you may still work around
it with DEVSTACK_GATE_INSTALL_TESTONLY=1.

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Common library for shared code

2015-03-20 Thread James Slagle
On Fri, Mar 20, 2015 at 7:20 AM, Jan Provazník jprov...@redhat.com wrote:
 On 03/18/2015 04:22 PM, Ben Nemec wrote:
 So is this eventually going to live in Tuskar?  If so, I would point out
 that it's going to be awkward to move it there if it starts out as a
 separate thing.  There's no good way I know of to copy code from one git
 repo to another without losing its history.

 I guess my main thing is that everyone seems to agree we need to do
 this, so it's not like we're testing the viability of a new project.
 I'd rather put this code in the right place up front than have to mess
 around with moving it later.  That said, this is kind of outside my
 purview so I don't want to hold things up, I just want to make sure
 we've given some thought to where it lives.

 -Ben


 Hi,
 I don't have a strong opinion where this lib should live. James, as TripleO
 PTL, what is your opinion about the lib location?

 For now, I set WIP on the patch which adds this lib into Stackforge [1]
 (which I sent shortly before Ben pointed out the concern about its
 location).

 Jan

 [1] https://review.openstack.org/#/c/165433/

I'd say just propose it under openstack/ for all the aforementioned
reasons. We might as well start where we intend to end up, especially
since Tuskar API is already under openstack/. If anyone ends up
surfacing any objections, that will become known on the review.


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-discoverd release plans

2015-03-20 Thread Imre Farkas

Hi Dmitry,

Sounds good to me! ;-)

Imre


On 03/20/2015 01:59 PM, Dmitry Tantsur wrote:

This is an informational email about upcoming ironic-discoverd-1.1.0
[1]. If you're not interested in discoverd, you may safely skip it.


Hi all!

Do you know what time is coming? Release time! I'm hoping to align this
ironic-discoverd release with the OpenStack one. Here's proposed plan,
which will be in effect, unless someone disagrees:

Apr 9: feature freeze. The goal is to leave me some time to test it with
Ironic RC and in-progress devstack integration [2]. Between this point
and the release day, git master can be considered a release candidate :)

Apr 30: release and celebration. stable/1.1 is branched and master is
opened for features.

For better scoping I've untargeted everything from the 1.1.0 milestone [1],
except for things I see as particularly important. We might add more if
we have time before FF.

Please let me know what you think.
Cheers,
  Dmitry

[1] https://launchpad.net/ironic-discoverd/+milestone/1.1.0
[2] https://etherpad.openstack.org/p/DiscoverdDevStack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Request exemption for removal of NetApp FC drivers (no voting CI)

2015-03-20 Thread ClaytonLuce, Timothy
I'd like to point out that for the NetApp FC drivers, NetApp has been in
discussions and updating progress on these drivers since their submission.



I will point out a discussion in the November core meeting where I brought up
the challenge around FC environments and the response I received:



16:10:44 timcl K2 for existing drivers only? What about the new drivers 
coming in? K2 is going to be a challenge especially with Fibre Channel

16:10:46 DuncanT_ thingee, deprecation or removal... I'll probably put the 
patches up for removal then convert them to deprecation 16:10:48 jungleboyj 
DuncanT_: So the expectation is that maintainers are reliably reportng CI 
results by K-2 ?

16:11:04 DuncanT_ jungleboyj, For exisiting drivers, yes

16:11:10 jungleboyj Ok.

16:11:38 DuncanT_ timcl, New drivers maybe target the end of the release? 
With a hard cutoff of L-2

16:11:44 thingee Since I know not everyone attends this meeting 
unfortunately, I think DuncanT_ should also post this to the list. 16:12:09 
DuncanT_ thingee, Will do. I'll email maintainers directly where possible too

16:12:29 thingee anyone opposed to this, besides there being more work for 
you? :)

16:12:30 timcl DuncanT_: OK we'll digest that and see where we are in the FC 
side

16:12:53 DuncanT_ timcl, Cool. Reach out to me if there are major issues, we 
can work on them.

16:13:14 DuncanT_ Ok, I think that's me done for this topic. Thanks all

16:13:17 timcl DuncanT_: thx



NetApp has in good faith been working toward implementing a CI for FC. I won't
go into the challenges of spending $$ on lab equipment to build out a scalable,
quality CI system, but suffice it to say the lab equipment is on order and
scheduled to arrive in the first part of April, at which point we can put in
place the CI for FC.



NetApp has been very forthcoming in our progress and have gotten all our other 
CI systems in place for 7-mode iSCSI/NFS, cDOT iSCSI/NFS and E-Series.



I respectfully request that the NetApp FC drivers be removed from this list of
drivers to be removed for Kilo and placed back in the release, and we can
negotiate an agreed-upon time for when the CI system for these drivers will be
in place.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-discoverd release plans

2015-03-20 Thread Dmitry Tantsur
This is an informational email about upcoming ironic-discoverd-1.1.0 
[1]. If you're not interested in discoverd, you may safely skip it.



Hi all!

Do you know what time is coming? Release time! I'm hoping to align this 
ironic-discoverd release with the OpenStack one. Here's proposed plan, 
which will be in effect, unless someone disagrees:


Apr 9: feature freeze. The goal is to leave me some time to test it with 
Ironic RC and in-progress devstack integration [2]. Between this point 
and the release day, git master can be considered a release candidate :)


Apr 30: release and celebration. stable/1.1 is branched and master is 
opened for features.


For better scoping I've untargeted everything from the 1.1.0 milestone [1],
except for things I see as particularly important. We might add more if
we have time before FF.


Please let me know what you think.
Cheers,
 Dmitry

[1] https://launchpad.net/ironic-discoverd/+milestone/1.1.0
[2] https://etherpad.openstack.org/p/DiscoverdDevStack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron extensions

2015-03-20 Thread Ian Wells
I agree with a lot of your desires, but not your reasoning as to why the
changes are problematic.  Also, your statements about API changes are
pretty reasonable and probably want writing down somewhere so that future
specs are evaluated against them.

Firstly, if we add to the core Neutron interface it's clear that old
plugins have to behave sensibly.  If we add an MTU attribute or a VLAN
attribute and we allow the user to tinker with it (in the case of MTU
that's a future plan, but in the VLAN case that's true now) then there has
to be a backward compatibility plan - we can't have plugins appearing as if
they support the functionality when they actually ignored the request.  The
specs are quite explicit about that and how to make sure it doesn't work
that way, so your criticism about not correctly implementing the opt-in
approach does not apply.  They're not aiming for an opt-in approach but an
opt-out one; they're aiming to add an attribute and ensure that unaware,
unchanged plugins respect the proposed API behaviour.

That being the case, neither the MTU plugin nor the VLAN plugin requires
support from the driver or plugin to work.  If the current patches do not
implement the model, and in particular if they work as you describe, then
they are indeed broken (not to spec, specifically) and need to be fixed.
The specs are quite clear about what should happen in those cases, and what
you're describing is a bug in the code, not a flaw in the proposed design -
I'll get someone to take a look, but can you file it so that it doesn't get
lost, along with a way to trigger it?

I think that, in these circumstances, the intended behaviour is logical and
in line with what you want to see.  If you are still interested in either
or both of them being extensions rather than core attributes, then that's
fine and something we should discuss further, but I don't think what you've
said here is justification.
-- 
Ian.

On 19 March 2015 at 16:23, Armando M. arma...@gmail.com wrote:

 Forwarding my reply to the other thread here:

 

 If my memory does not fail me, changes to the API (new resources, new
 resource attributes or new operations allowed to resources) have always
 been done according to these criteria:

- an opt-in approach: this means we know the expected behavior of the
plugin as someone has coded the plugin in such a way that the API change is
supported;
- an opt-out approach: if the API change does not require explicit
backend support, and hence can be deemed supported by all plugins.
- a 'core' extension (ones available in neutron/extensions) should be
implemented at least by the reference implementation;

 Now, there might have been examples in the past where criteria were not
 met, but these should be seen as exceptions rather than the rule, and as
 such, fixed as defects so that an attribute/resource/operation that is
 accidentally exposed to a plugin will either be honored as expected or an
 appropriate failure is propagated to the user. Bottom line, the server must
 avoid to fail silently, because failing silently is bad for the user.

 Now both features [1] and [2] violated the opt-in criterion above: they
 introduced resources attributes in the core models, forcing an undetermined
 behavior on plugins.

 I think that keeping [3,4] as is can lead to a poor user experience; IMO
 it's unacceptable to let a user specify the attribute, and see that
 ultimately the plugin does not support it. I'd be fine if this was an
 accident, but doing this by design is a bit evil. So, I'd suggest the
 following, in order to keep the features in Kilo:

- Patches [3, 4] did introduce config flags to control the plugin
behavior, but it looks like they were not applied correctly; for instance,
the vlan_transparent case was only applied to ML2. Similarly the MTU config
flag was not processed server side to ensure that plugins that do not
support advertisement do not fail silently. This needs to be rectified.
- As for VLAN transparency, we'd need to implement work item 5 (of 6)
of spec [2], as this extension without at least a backend able to let
tagged traffic pass doesn't seem right.
- Ensure we sort out the API tests so that we know how the features
behave.

 Now granted that controlling the API via config flags is not the best
 solution, as this was always handled through the extension mechanism, but
 since we've been talking about moving away from extension attributes with
 [5], it does sound like a reasonable stop-gap solution.

 Thoughts?
 Armando

 [1]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
 [2]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
 [3]
 https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/mtu-selection-and-advertisement,n,z
 [4]
 

Re: [openstack-dev] Stable/icehouse: oslo.messaging RPCClient segmentation fault core dumped

2015-03-20 Thread ZIBA Romain
Hello,
For information, I solved my problem by uninstalling the librabbitmq package,
which is not installed in the stable Icehouse release of OpenStack RDO.
This library was causing my problem.
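
For anyone hitting the same crash, a quick way to check which code path will be
taken (a rough sketch of mine: kombu prefers the librabbitmq C binding over its
pure-Python amqp transport whenever the module is importable):

```python
def using_c_binding():
    """Report whether the librabbitmq C binding is importable.

    kombu prefers librabbitmq over its pure-Python amqp transport when the
    module can be imported, so this tells you which code path oslo.messaging
    will actually exercise.
    """
    try:
        import librabbitmq  # noqa: F401
        return True
    except ImportError:
        return False


print("librabbitmq C binding in use:", using_c_binding())
```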

Best regards,
Romain Ziba.

From: ZIBA Romain
Sent: Thursday, March 19, 2015 18:11
To: 'openstack-dev@lists.openstack.org'
Subject: RE: Stable/icehouse: oslo.messaging RPCClient segmentation fault core
dumped

Hello everyone,

I have dug into this problem and realized that this piece of code works on
CentOS 6.6 running OpenStack stable Icehouse installed with RDO.
My guess is that there may be an issue either with the operating system or with
the devstack installation.

If you have any clue, please let me know.

Thanks & best regards,
Romain Ziba.

From: ZIBA Romain
Sent: Wednesday, March 18, 2015 13:07
To: openstack-dev@lists.openstack.org
Subject: Stable/icehouse: oslo.messaging RPCClient segmentation fault core dumped

Hello everyone,
I am having an issue using the RPCClient of the oslo.messaging package 
delivered through the stable/icehouse release of devstack (v 1.4.1).

With this simple script:


import sys

from oslo.config import cfg
from oslo import messaging

from project.openstack.common import log

LOG = log.getLogger(__name__)

log_levels = (cfg.CONF.default_log_levels +
              ['stevedore=INFO', 'keystoneclient=INFO'])
cfg.set_defaults(log.log_opts, default_log_levels=log_levels)

argv = sys.argv
cfg.CONF(argv[1:], project='test_rpc_server')

log.setup('test_rpc_server')

transport_url = 'rabbit://guest:guest@localhost:5672/'
transport = messaging.get_transport(cfg.CONF, transport_url)
target = messaging.Target(topic='test_rpc', server='server1')
client = messaging.RPCClient(transport, target)
ctxt = {'some': 'context'}
try:
    res = client.call(ctxt, 'call_method_1')
except Exception as e:
    LOG.debug(e)
print res


svcdev@svcdev-openstack: python rpc_client.py
2015-03-18 11:44:01.018 15967 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on localhost:5672
2015-03-18 11:44:01.125 15967 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on localhost:5672
2015-03-18 11:44:01.134 15967 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on localhost:5672
2015-03-18 11:44:01.169 15967 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on localhost:5672
Segmentation fault (core dumped)

The last Python method called is the following one (in the librabbitmq package,
v1.0.3):

def basic_publish(self, body, exchange='', routing_key='',
                  mandatory=False, immediate=False, **properties):
    if isinstance(body, tuple):
        body, properties = body
    elif isinstance(body, self.Message):
        body, properties = body.body, body.properties
    return self.connection._basic_publish(self.channel_id,
        body, exchange, routing_key, properties,
        mandatory or False, immediate or False)

The script crashes after trying to call _basic_publish.

For information, I've got the trusty's rabbitmq-server version (v 3.2.4-1).
Also, replacing the call method with a cast method results in the message being queued.

Could you please tell me if I'm doing something wrong? Is there a bug in the 
c-library used by librabbitmq?

Thanks beforehand,
Romain Ziba.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.utils] allow strutils.mask_password to mask keys dynamically

2015-03-20 Thread Matthew Van Dijk
I've come across a use case for allowing dynamic keys to be masked.
The hardcoded list is good for common keys, but there are cases where
masking a custom value is useful without having to add it to the
hardcoded list.
I propose we add an optional parameter that is a list of secret keys
whose values will be masked.
There is concern that this will lead to differing levels of security,
but I disagree: either the message will be masked before being passed
on, or mask_password will be called. In either case the developer
should be aware of the incoming data and mask it explicitly.
Keeping only a hardcoded list discourages use of the function.
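
To make the proposal concrete, here is a rough sketch of the signature change
(my own simplified regex-based version, not the actual oslo.utils
implementation, whose patterns are more elaborate):

```python
import re

# Simplified stand-in for the hardcoded defaults in strutils
_DEFAULT_KEYS = ['password', 'auth_token', 'admin_password', 'token']


def mask_password(message, secret='***', secret_keys=None):
    """Mask the values of sensitive keys in a log message.

    secret_keys is the proposed optional parameter: extra keys to mask
    on top of the hardcoded defaults.
    """
    for key in _DEFAULT_KEYS + list(secret_keys or []):
        # Handles "key = value" and "key: value" styles
        message = re.sub(r"(%s\s*[=:]\s*)[^\s,}]+" % re.escape(key),
                         r"\g<1>%s" % secret, message)
    return message
```

With secret_keys=['api_key'], for example, a driver could mask a
vendor-specific field without touching the shared default list.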


Re: [openstack-dev] [neutron][lbaas] canceling meeting

2015-03-20 Thread Vijay Venkatachalam

+1 For on demand meeting.

On demand lbaas meetings will happen in neutron meeting and not in Octavia 
meetings, right?

Sent from Surface

From: Susanne Balle sleipnir...@gmail.com
Sent: Friday, 20 March 2015 20:20
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org

Make sense to me. Susanne

On Thu, Mar 19, 2015 at 5:49 PM, Doug Wiegley 
doug...@parksidesoftware.com wrote:
Hi lbaas'ers,

Now that lbaasv2 has shipped, the need for a regular weekly meeting is 
greatly reduced. I propose that we cancel the regular meeting, and discuss 
neutron-y things during the neutron on-demand agenda, and octavia things in the 
already existing octavia meetings.

Any objections/alternatives?

Thanks,
doug





Re: [openstack-dev] [Ironic] ironic-discoverd release plans

2015-03-20 Thread 高田唯子
Thank you, Dmitry.
I agree!


Best Regards,
Yuiko Takada

2015-03-20 23:32 GMT+09:00 Imre Farkas ifar...@redhat.com:

 Hi Dmitry,

 Sounds good to me! ;-)

 Imre



 On 03/20/2015 01:59 PM, Dmitry Tantsur wrote:

 This is an informational email about upcoming ironic-discoverd-1.1.0
 [1]. If you're not interested in discoverd, you may safely skip it.


 Hi all!

 Do you know what time is coming? Release time! I'm hoping to align this
 ironic-discoverd release with the OpenStack one. Here's proposed plan,
 which will be in effect, unless someone disagrees:

 Apr 9: feature freeze. The goal is to leave me some time to test it with
 Ironic RC and in-progress devstack integration [2]. Between this point
 and the release day, git master can be considered a release candidate :)

 Apr 30: release and celebration. stable/1.1 is branched and master is
 opened for features.

 For better scoping I've untargeted everything from the 1.1.0 milestone [1],
 except for things I see as particularly important. We might add more if
 we have time before FF.

 Please let me know what you think.
 Cheers,
   Dmitry

 [1] https://launchpad.net/ironic-discoverd/+milestone/1.1.0
 [2] https://etherpad.openstack.org/p/DiscoverdDevStack

 


Re: [openstack-dev] [cinder] May you reconsider about huawei driver?

2015-03-20 Thread Walter A. Boring IV

On 03/19/2015 07:13 PM, liuxinguo wrote:


Hi Mike,

I have seen the patch at https://review.openstack.org/#/c/165990/ 
saying that huawei driver will be removed because “the maintainer does 
not have a CI reporting to ensure their driver integration is successful”.




Looking at this patch, there is no CI reporting from the Huawei Volume 
CI check.

Your CI needs to be up and stable, running on all patches.

But in fact we have had a CI for months and it is really reporting
to reviews; the most recent posts are:


*https://review.openstack.org/#/c/165796/

Post time: 2015-3-19 0:14:56

*https://review.openstack.org/#/c/164697/

Post time: 2015-3-18 23:55:37

I don't see any 3rd PARTY CI Reporting here because the patch is in 
merge conflict.



*https://review.openstack.org/164702/

Post time: 2015-3-18 23:55:37


Same


*https://review.openstack.org/#/c/152401/

Post time: 3-18 23:08:45


This patch also has NO Huawei Volume CI check results.


From what I'm seeing there isn't any consistent evidence proving that
the Huawei Volume CI checks are stable and running on every Cinder patch.


Walt


[openstack-dev] [Cinder] Bug Triage - Call for Participation

2015-03-20 Thread Ivan Kolodyazhny
Hi OpenStack Developers!

I want to remind you that bug triage [1] is an important part of contributing
to any project [2]. We've got a lot of untriaged bugs in Cinder [3].

Please do not hesitate to triage bugs in Cinder [4] to make the upcoming Kilo
release better.

You can ask any questions via the openstack-dev mailing list or in the
#openstack-cinder channel on Freenode.


[1] https://wiki.openstack.org/wiki/BugTriage
[2] https://wiki.openstack.org/wiki/How_To_Contribute
[3]
https://bugs.launchpad.net/cinder/+bugs?field.searchtext=orderby=-importancefield.status%3Alist=NEWassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=onsearch=Search
[4] https://wiki.openstack.org/wiki/Cinder/Contributing#Triage_incoming_bugs

Regards,
Ivan Kolodyazhny,
e0ne in IRC,
Software Engineer,
Mirantis Inc.


Re: [openstack-dev] openstack storage configuration file (cinder-service)

2015-03-20 Thread Asselin, Ramy
You can follow the basic instructions here in FAQ:
https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers

Ramy


From: Kamsali, RaghavendraChari (Artesyn) 
[mailto:raghavendrachari.kams...@artesyn.com]
Sent: Friday, March 20, 2015 4:20 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] openstack storage configuration file (cinder-service)

Hi,

Can anyone help with how to configure the local.conf file for a storage node
running the Cinder service in DevStack?



Thanks and Regards,
Raghavendrachari kamsali | Software Engineer II  | Embedded Computing
Artesyn Embedded Technologies | 5th Floor, Capella Block, The V, Madhapur| 
Hyderabad, AP 500081 India
T +91-40-66747059 | M +919705762153



[openstack-dev] [magnum] updated Fedora Atomic image available - needs testing

2015-03-20 Thread Steven Dake (stdake)
Hey folks,

I have manually updated the Fedora 21 Atomic image via rpm-ostree upgrade.  
This image includes kubernetes 0.11 which some people have said is required to 
use kubectl with current Magnum master.  I don’t have time for the next week to 
heavily test, but if someone could run this image through testing with Magnum, 
I’d appreciate it.

https://fedorapeople.org/groups/heat/kolla/fedora-21-atomic-2.qcow2


Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

2015-03-20 Thread Yathiraj Udupi (yudupi)
Hi Tim,

I think what you are saying is a reasonable goal in terms of high-level 
Congress policies not having to depend on the domain-specific solver / policy 
engines.  As long as there are Congress adapters to transform the user’s
policies into something the domain-specific solver understands, it will be fine.

With respect to delegation to solver scheduler, I agree that we need a 
combination of options (1) and (2).
For option (2), in order to leverage the pre-defined constraint classes in
Solver Scheduler, it is more like the Congress adapter notifying the Solver
Scheduler to include that constraint as part of the current placement decision.
Also, we have to take into account that it is not just the user-defined Congress
policies that define the placement choices: the already existing
infrastructure-specific constraints, in terms of host capacities or any other
provider-specific constraints, will have to be included in the placement
decision calculation by the solver. The pre-configured scheduler/placement
constraints will always have to be included, but some additional policies from
Congress can introduce additional constraints.

For option (1), a new custom constraint class: Solver Scheduler already has an
interface, BaseLinearConstraint -
https://github.com/stackforge/nova-solver-scheduler/blob/master/nova/scheduler/solvers/linearconstraints/__init__.py
- with methods get_coefficient_vectors, get_variable_vectors, and
get_operations, which are invoked by the main solver class to feed in the
variables and the host metrics, along with some input parameters used to fetch
additional metadata for building the matrices. Eventually the main solver class
builds the LP program by invoking all the constraint classes. So for the
Congress delegation scenario, the Congress adapter will have to populate these
matrices based on the Datalog policy; as part of the solver scheduler’s
workflow, this special CongressConstraint class will have to call the Congress
adapter with the variables already known, and get the necessary values.
For reference, an example implementation of this constraint class is here - 
https://github.com/stackforge/nova-solver-scheduler/blob/master/nova/scheduler/solvers/linearconstraints/max_disk_allocation_constraint.py
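To illustrate the matrix-feeding shape described above, a constraint class might look roughly like this (schematic only; the method names and signatures are simplified stand-ins, not the actual nova-solver-scheduler interface):

```python
class MemoryCapacityConstraint:
    """Schematic LP row per host h: sum(ram[i] * x[h][i]) <= free_ram[h]."""

    def __init__(self, free_ram_mb):
        # free_ram_mb: mapping of host name -> free RAM in MB
        self.free_ram_mb = free_ram_mb

    def get_coefficient_vectors(self, instance_ram_mb, hosts):
        # One row per host; the coefficient of the placement variable
        # x[h][i] is instance i's RAM demand.
        return [list(instance_ram_mb) for _ in hosts]

    def get_operations(self):
        return '<='

    def get_bounds(self, hosts):
        return [self.free_ram_mb[h] for h in hosts]

c = MemoryCapacityConstraint({'host1': 2048, 'host2': 4096})
print(c.get_coefficient_vectors([512, 1024], ['host1', 'host2']))
# -> [[512, 1024], [512, 1024]]
print(c.get_bounds(['host1', 'host2']))  # -> [2048, 4096]
```

A CongressConstraint in the same mold would produce its rows by querying the Congress adapter instead of hardcoding them.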

Will need some more thoughts, but the approach seems reasonable.

Thanks,
Yathi.



On 3/18/15, 8:34 AM, Tim Hinrichs thinri...@vmware.com wrote:

I responded in the gdoc.  Here’s a copy.

One of my goals for delegation is to avoid asking people to write policy 
statements specific to any particular domain-specific solver.  People ought to 
encode policy however they like, and the system ought to figure out how best to 
enforce that policy  (delegation being one option).

Assuming that's a reasonable goal, I see two options for delegation to
SolverScheduler:

(1) SolverScheduler exposes a custom constraint class.  Congress generates the 
LP program from the Datalog, similar to what is described in this doc, and 
gives that LP program as custom constraints to the  SolverScheduler.  
SolverScheduler is then responsible for enforcing that policy both during 
provisioning of new servers and for monitoring/migrating servers once 
provisioning is finished.

(2) The Congress adapter for SolverScheduler understands the semantics of 
MemoryCapacityConstraint, identifies when the user has asked for that 
constraint, and replaces that part of the LP program with the 
MemoryCapacityConstraint.

We probably want a combination of (1) and (2) so that we handle any gaps in the 
pre-defined constraints that SolverScheduler has, while at the same time 
leveraging the pre-defined constraints when possible.

Tim


On Mar 17, 2015, at 6:09 PM, Yathiraj Udupi (yudupi) yud...@cisco.com wrote:

Hi Tim,

I posted this comment on the doc.  I am still pondering the possibility of
having a policy-driven scheduler workflow via the Solver Scheduler placement
engine, which is also LP-based, like the one you describe in your doc.
I know in your initial meeting, you plan to go over your proposal of building a 
VM placement engine that subscribes to the Congress DSE,  I probably will 
understand the Congress workflows better and see how I could incorporate this 
proposal to talk to the Solver Scheduler to make the placement decisions.

The example you provide in the doc, is a very good scenario, where a VM 
placement engine should continuously monitor and trigger VM migrations.

I am also interested in the case of policy-driven scheduling for the initial
creation of VMs. This is where, say, people call Nova APIs to create a new
set of VMs. Here the scheduler workflow should address the constraints
imposed by the user's policies.

Say the simple policy is: Host's free RAM >= 0.25 * Memory_Capacity
I would like the scheduler to use this policy as defined from Congress, and 
apply 

[openstack-dev] [third-party]Properly format the status message with gerrit trigger

2015-03-20 Thread Jordan Pittier
Hi guys,
I am in charge of a third-party CI (for Cinder). My setup is based on
Jenkins + the gerrit-trigger plugin. As you may know, it's hard to customize
the message in the Gerrit Verified Commands config. In particular, it's
not possible to add a blank/empty line, and you need blank lines if you
have several builds for the same trigger.

As you know, the infra team wants the 3rd party CI to respect this format :
http://ci.openstack.org/third_party.html#posting-result-to-gerrit

Currently the gerrit-trigger plugin reports in this format:

http://link.to/result : [SUCCESS|FAILURE]

As you can see, the test-name-no-spaces part is missing...

I don't want to fork gerrit-trigger (the code to change is here [1])
just for that, and I know other people have faced the same issue. For some
obscure reason I don't want to install/use Zuul.

So would it make sense to slightly change the regex [2] so that the
gerrit-trigger plugin is also supported out of the box? That would make my
life easier :)

Thanks,
Jordan

[1]
https://github.com/jenkinsci/gerrit-trigger-plugin/blob/master/src/main/java/com/sonyericsson/hudson/plugins/gerrit/trigger/gerritnotifier/ParameterExpander.java#L531
[2]
https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/gerrit.pp#n164
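For illustration, a pattern that tolerates both shapes (with and without the test name) might look like this; the regex is a hypothetical sketch in Python, not the actual expression in gerrit.pp:

```python
import re

# Expected by the infra docs:  "* test-name http://link : SUCCESS"
# gerrit-trigger produces:     "http://link : SUCCESS"  (test name missing)
pattern = re.compile(
    r'^\*?\s*(?:(?P<name>\S+)\s+)?(?P<url>https?://\S+)\s*:\s*'
    r'(?P<status>SUCCESS|FAILURE)',
    re.MULTILINE)

m = pattern.search('* dsvm-tempest-full http://ci.example.com/42 : SUCCESS')
assert m.group('name') == 'dsvm-tempest-full'

m = pattern.search('http://ci.example.com/42 : FAILURE')
assert m is not None and m.group('name') is None  # still matches, name absent
```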


[openstack-dev] [ceilometer] Pipeline for notifications does not seem to work

2015-03-20 Thread Tim Bell

I'm running Juno with ceilometer and trying to produce a new meter which is 
based on vcpus * F (where F is a constant that is different for each 
hypervisor).

When I create a VM, I get a new sample for vcpus.

However, it does not appear to fire the transformer.

The same approach using cpu works OK, but that meter is polled on a regular
interval rather than emitted as a one-off notification when the VM is created.

Any suggestions or alternative approaches for how to get a sample based the 
number of cores scaled by a fixed constant?

Tim

In my pipeline.yaml sources,

- name: vcpu_source
  interval: 180
  meters:
  - vcpus
  sinks:
  - hs06_sink

In my transformers, I have

- name: hs06_sink
  transformers:
      - name: unit_conversion
        parameters:
            target:
                name: hs06
                unit: HS06
                type: gauge
                scale: 47.0
  publishers:
      - notifier://





Re: [openstack-dev] [barbican] Barbican : Unable to send PUT request to store the secret

2015-03-20 Thread John Wood
Hello Asha,

I missed this later email, sorry.

The content type on the PUT call determines the type of the secret (the first 
POST call only creates the metadata for the secret).

This older wiki page might help clarify things: 
https://github.com/cloudkeep/barbican/wiki/Application-Programming-Interface#two-step-binary-secret-createretrieve

Thanks,
John

From: Asha Seshagiri asha.seshag...@gmail.com
Reply-To: openstack-dev openstack-dev@lists.openstack.org
Date: Friday, March 20, 2015 at 3:04 PM
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Barbican : Unable to send PUT request to store the
secret

Hi All,

The PUT request has now succeeded; the content type in the header needs to be
text/plain. I had thought that the data type of the data parameter would
determine the content type of the header.

For example, in this case the data is passed in the following format:
'{"secret": {"payload": "secretput", "payload_content_type": "text/plain"}}',
which is JSON.

curl -X PUT -H 'content-type:text/plain' -H 'X-Project-Id: 12345' -d
'{"secret": {"payload": "secretput", "payload_content_type": "text/plain"}}'
http://localhost:9311/v1/secrets/89d424c3-f4c1-4822-8bd7-7691f40f7ba3

Could anyone provide clarity on the content type of the header?

Thanks and Regards,
Asha Seshagiri

On Fri, Mar 20, 2015 at 2:05 PM, Asha Seshagiri asha.seshag...@gmail.com wrote:
Hi All ,

I am unable to send the PUT request using the CURL command for storing the 
secret .


root@barbican:~# curl -X POST -H 'content-type:application/json' -H
'X-Project-Id: 12345' -d '{"secret": {"name": "secretname", "algorithm": "aes",
"bit_length": 256, "mode": "cbc"}}' http://localhost:9311/v1/secrets
{"secret_ref":
"http://localhost:9311/v1/secrets/84aaac35-daa9-4ffb-b03e-18596729705d"}

curl -X PUT -H 'content-type:application/json' -H 'X-Project-Id: 12345' -d
'{"secret": {"payload": "secretput", "payload_content_type": "text/plain"}}'
http://localhost:9311/v1/secrets
{"code": 405, "description": "", "title": "Method Not Allowed"}

It would be great if someone could help me.
Thanks in advance.

--
Thanks and Regards,
Asha Seshagiri



--
Thanks and Regards,
Asha Seshagiri


Re: [openstack-dev] [cinder] May you reconsider about huawei driver?

2015-03-20 Thread liuxinguo
Yes, our CI is currently not stable, but it is really reporting; and if you have
a look at all the other CIs' reports, I think some of them are really not stable
either.

The review.openstack.org server is located in the United States (U.S.), and the
network between our CI and the review server is really not good.

We have been working hard on this, and our CI will be moved to a more stable
network soon.

And we really have a whole test team testing our drivers outside of the CI,
maybe more strictly than the tests in the CI.

Thanks, Walt






Re: [openstack-dev] [Ironic] ironic-discoverd release plans

2015-03-20 Thread Mitsuhiro SHIGEMATSU
Dmitry,

Thank you! Great!

pshige



Re: [openstack-dev] [cinder] May you reconsider about huawei driver?

2015-03-20 Thread liuxinguo
It is huawei-volume-ci that reports on behalf of the Huawei 18000 iSCSI and
18000 FC drivers, not huawei-ci. I am sorry the two CI names are so similar.

And I think the point is: does the requirement really call for a stable CI, and
if a CI is not stable, can it get an exemption like the NetApp FC drivers did?
I think this is the point.

Thanks,
Liu

-----Original Message-----
From: Mike Perez [mailto:thin...@gmail.com]
Sent: 21 March 2015 2:28
To: jsbry...@electronicjungle.net; OpenStack Development Mailing List (not for
usage questions)
Subject: Re: [openstack-dev] [cinder] May you reconsider about huawei driver?

On 09:41 Fri 20 Mar , Jay S. Bryant wrote:
 Mike,
 
 Looks like this removal may have been a mistake.  We should readdress.

This was not a mistake. As Walt mentioned, that CI run failed anyway. Also,
if you take a look at Huawei's CI reporting history, it's infrequent AND
not reliable [1].

This does not satisfy the requirements. If we're saying they have been having
networking issues from January until now, that really sounds to me like it was
*not* a priority.

[1] - 
https://review.openstack.org/#/q/reviewer:+huawei-ci+project:openstack/cinder,n,z

--
Mike Perez



Re: [openstack-dev] [barbican] Barbican : Unable to send PUT request to store the secret

2015-03-20 Thread John Wood
Hello Asha,

First, please add '[barbican]' to the subject line to indicate this thread is 
of Barbican interest only. So for example, the revised subject line would look 
like:
Re: [openstack-dev] [barbican] Unable to send PUT request to store the secret

You appear to be doing a two-step secret upload, in which case the second step
(the PUT) needs the full URI to the secret; in other words, the URI in your PUT
call is missing the UUID.

This demo script may help you as well: 
https://github.com/openstack/barbican/blob/master/bin/demo_requests.py#L85

Thanks,
John




[openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-20 Thread Chris Friesen

Hi,

I've recently been playing around a bit with API microversions and I noticed 
something that may be problematic.


The way microversions are handled, there is a monotonically increasing 
MAX_API_VERSION value in nova/api/openstack/api_version_request.py.  When you 
want to make a change you bump the minor version number and it's yours. 
End-users can set the microversion number in the request to indicate what they 
support, and all will be well.


The issue is that it doesn't allow for OpenStack providers to add their own 
private microversion(s) to the API.  They can't just bump the microversion 
internally because that will conflict with the next microversion bump upstream 
(which could cause problems when they upgrade).


In terms of how to deal with this, it would be relatively simple to just bump 
the major microversion number at the beginning of each new release.  However, 
that would make it difficult to backport bugfixes/features that use new 
microversions since they might overlap with private microversions.


I think a better solution might be to expand the existing microversion API to
include a third digit, which could be considered a private microversion, and
provide a way to check the third digit separately from the other two. That way
providers would have a way to add custom features in a backwards-compatible
manner without worrying about colliding with upstream code.
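To make the idea concrete: comparisons on the first two components would stay upstream-compatible, while the third is consulted separately (a hypothetical sketch, not Nova's actual APIVersionRequest class):

```python
from functools import total_ordering

@total_ordering
class APIVersion:
    """Microversion with an optional vendor-private third component."""

    def __init__(self, major, minor, private=0):
        self.major, self.minor, self.private = major, minor, private

    def _key(self):
        # Ordering deliberately ignores the private component, so vendor
        # additions never collide with upstream minor bumps.
        return (self.major, self.minor)

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()

    def supports_private(self, required):
        # Private capabilities are checked on their own axis.
        return self.private >= required

v = APIVersion(2, 3, private=1)
print(APIVersion(2, 3) == v)  # -> True (upstream comparison ignores private)
print(v.supports_private(1))  # -> True
```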


Thoughts?

Chris



[openstack-dev] [Ironic] Announcing Bifrost

2015-03-20 Thread Julia Kreger
At present, the only configuration in which Ironic is tested is as part of a
complete OpenStack cloud. While this is also the most common usage, there has
been significant interest in running Ironic outside of OpenStack contexts as an
independent service for provisioning hardware in trusted environments. This
work was initially done as a proof of concept to determine the viability of
that use case.

Bifrost is a set of Ansible playbooks that automates the task of deploying
a base image onto a set of known hardware using Ironic. It provides modular
utility for one-off operating system deployment with as few operational
requirements as reasonably possible. Our intention is to utilize this to
further develop tooling around and testing of Ironic in the context of a
standalone installation.

How it works:

Bifrost works by installing Ironic with no other OpenStack services on a
node, coupled with the minimum required infrastructure (DHCP, TFTP, HTTP)
to utilize the Ironic-Python-Agent method of deploying a node. We intend to
add support for additional deployment methods as time goes on.

After installation, a hardware inventory file is read, and all the hardware
is enrolled with Ironic. This is also used to generate node-specific
config-drives containing SSH keys and static network configuration, which
are written out during deployment.
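For a sense of what the enrollment input looks like, a hardware inventory entry might be shaped roughly like this (illustrative only; consult the README for the actual schema and field names):

```yaml
# Hypothetical inventory entry -- field names are illustrative.
node1:
  uuid: "00000000-0000-0000-0000-000000000001"
  driver: "agent_ipmitool"
  driver_info:
    power:
      ipmi_address: "10.0.0.10"
      ipmi_username: "admin"
      ipmi_password: "secret"
  nics:
    - mac: "52:54:00:12:34:56"
  ipv4_address: "10.0.0.100"
```

Bifrost reads entries like this to enroll each node with Ironic and to generate its per-node config-drive.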

Post-deployment, the deployed machines are configured to boot from their
local disks. The Bifrost node can be completely removed, and all machines
will remain operational.

If by chance, you don't have any hardware available to test Bifrost on, a
testing mode exists which can leverage virtual machines on a Linux system.
This will be used to provide CI for Bifrost, and, we hope, functional
testing of Ironic in the near future. Checkout the README for more
information.

The README can be found at
https://github.com/juliakreger/bifrost/blob/master/README.rst

Special thanks to:

David Shrewsbury
Monty Taylor
Devananda van der Veen


Re: [openstack-dev] [neutron][lbaas] canceling meeting

2015-03-20 Thread Brandon Logan
That is correct.

On Fri, 2015-03-20 at 15:11 +, Vijay Venkatachalam wrote:
 
 
 +1 For on demand meeting.
 
 
 On demand lbaas meetings will happen in neutron meeting and not in
 Octavia meetings, right?
 
 
 Sent from Surface
 
 
 From: Susanne Balle
 Sent: ‎Friday‎, ‎20‎ ‎March‎ ‎2015 ‎20‎:‎20
 To: OpenStack Development Mailing List (not for usage questions)
 
 
 Make sense to me. Susanne
 
 On Thu, Mar 19, 2015 at 5:49 PM, Doug Wiegley
 doug...@parksidesoftware.com wrote:
 Hi lbaas'ers,
 
 Now that lbaasv2 has shipped, the need for a regular weekly
 meeting is greatly reduced. I propose that we cancel the
 regular meeting, and discuss neutron-y things during the
 neutron on-demand agenda, and octavia things in the already
 existing octavia meetings.
 
 Any objections/alternatives?
 
 Thanks,
 doug
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [cinder] Could you please re-consider Oracle ZFS/SA Cinder drivers (iSCSI and NFS)

2015-03-20 Thread Mike Perez
On 12:37 Fri 20 Mar , Alka Deshpande wrote:
 Hi Mike,
 
 My team and I would like to respectfully request a revert for the
 change at: https://review.openstack.org/#/c/165939/ to bring back the
 ZFSSA drivers to Kilo. Oracle has the CI working internally and we
 are in the process of going through internal security review, which
 is expected to be done by the end of March.
 
 In the mean time, we have gotten permission to post test results to
 an external webserver. We will have results reported by the end of
 the week. If need be, we can show proof of CI running internally by
 pasting results to pastebin: http://paste.openstack.org/show/193964/
 
 We have been working diligently on setting up the CI system, and we
 have been keeping you updated since the mid-cycle meetup in Austin.
 We really appreciate your consideration.  We do take CI setup effort
 very seriously and are striving to get it to the level of your
 satisfaction.

The tag for Kilo in Cinder has already happened, so it's too late to revert. We
may be able to revisit this in Kilo RC, but I want to see your CI reporting
reliably to Cinder reviews between now and then.

FWIW, everyone who was removed told me they were taking it seriously too; just
not seriously enough to have CIs reporting by the overly announced deadline.
[1][2][3][4][5][6][7][8]

[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054614.html
[2] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
[3] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-21-16.00.log.html
[4] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-02-04-16.04.log.html
[5] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-02-18-16.00.log.html
[6] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-02-25-16.00.log.html
[7] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-03-04-16.00.log.html
[8] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-03-18-16.00.log.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-20 Thread Jay Pipes

On 03/11/2015 06:48 PM, John Belamaric wrote:

This has been settled and we're not moving forward with it for Kilo. I
agree tenants are an administrative concept, not a networking one so
using them for uniqueness doesn't really make sense.

In Liberty we are proposing a new grouping mechanism, as you call it,
specifically for the purpose of defining uniqueness - address scopes.
This would be owned by a tenant but could be shared across tenants. It's
still in the early stages of definition though, and more discussion is
needed but should probably wait until after Kilo is out!


This is a question purely out of curiosity. Why is Neutron averse to 
the concept of using tenants as natural ways of dividing up the cloud -- 
which at its core means multi-tenant, on-demand computing and networking?


Is this just due to a lack of traditional use of the term in networking 
literature? Or is this something more deep-grained (architecturally) 
than that?


Genuinely curious.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][FFE] - Reseller Implementation

2015-03-20 Thread Geoff Arnold
Glad to see this FFE. The Cisco Cloud Services team is very interested in the 
Reseller use case, and in a couple of possible extensions of the work. 
http://specs.openstack.org/openstack/keystone-specs/specs/kilo/reseller.html 
covers the Keystone use cases, but there are several other developments 
required in other OpenStack projects to come up with a complete reseller 
“solution”. For my information, has anyone put together an overarching 
blueprint which captures the top level Reseller use cases and identifies all of 
the sub-projects and their dependencies? If so, wonderful. If not, we might try 
to work on this in the new Product Management WG.

I mentioned “extensions” to 
http://specs.openstack.org/openstack/keystone-specs/specs/kilo/reseller.html.
There are two that we’re thinking about:
- the multi-provider reseller: adding the user story where Martha buys 
OpenStack services from two or more 
  providers and presents them to her customers through a single Horizon instance
- stacked reseller: Martha buys OpenStack services from a provider, Alex, and 
also from a reseller, Chris, who 
  purchases OpenStack services from multiple providers 

In each case, the unit of resale is a “virtual region”: a provider region 
subsetted using HMT/domains, with IdP supplied by the reseller, and constrained 
by resource consumption policies (e.g. Nova AZ “foo” is not available to 
customers of reseller “bar”).

I strongly doubt that any of this is particularly original, but I haven’t seen 
it written up anywhere.

Cheers,

Geoff Arnold
Cisco Cloud Services
geoar...@cisco.com
ge...@geoffarnold.com
@geoffarnold

 On Mar 19, 2015, at 11:22 AM, Raildo Mascena rail...@gmail.com wrote:
 
 In addition, 
 
 In the last keystone meeting, on March 17 in the IRC channel 
 http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2015-03-17.log
 we decided that Henry Nash (keystone core) will sponsor this feature as an 
 FFE.
 
 Cheers,
 
 Raildo
 
 On Tue, Mar 17, 2015 at 5:36 PM Raildo Mascena rail...@gmail.com wrote:
 Hi Folks,
 
 We discussed the Reseller use case a lot at the last Summit. OpenStack needs
 to grow support for hierarchical ownership of objects. This enables the
 management of subsets of users and projects in a way that is much more
 comfortable for private clouds, besides giving public cloud providers the
 option of reselling a piece of their cloud.
 
 More detailed information can be found in the spec for this change at: 
 https://review.openstack.org/#/c/139824
 
 The current code change for this is split into 8 patches (to make it easier 
 to review). We currently have 7 patches in code review and we are finishing 
 the last one.
 
 Here is the workflow of our patches:
 
 1- Adding a field to enable the possibility to have a project with the domain 
 feature: https://review.openstack.org/#/c/157427/
 
 2- Change some constraints and create some options to list projects (for 
 is_domain flag, for parent_id):
 https://review.openstack.org/#/c/159944/
 https://review.openstack.org/#/c/158398/
 https://review.openstack.org/#/c/161378/
 https://review.openstack.org/#/c/158372/
 
 3- Reflect domain operations to project table, mapping domains to projects 
 that have the is_domain attribute set to True. In addition, it changes the 
 read operations to use only the project table. Then, we will drop the Domain 
 Table.
 https://review.openstack.org/#/c/143763/
 https://review.openstack.org/#/c/161854/ (Only patch with work in progress)
 
 4- Finally, the inherited role will not be applied to a subdomain and its sub 
 hierarchy. https://review.openstack.org/#/c/164180/
 
 Since the implementation is almost complete and awaiting code review, I am
 requesting an FFE to enable the implementation of this last patch and work
 to have the implementation merged in Kilo.
 
 
 Raildo
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Neutron] Neutron extenstions

2015-03-20 Thread Armando M.
In order to track this, and for Kyle's sanity, I have created these two RC1
bugs:

- https://bugs.launchpad.net/neutron/+bug/1434667
- https://bugs.launchpad.net/neutron/+bug/1434671

Please, let's make sure that whatever approach we decide on, the resulting
code fix targets those two bugs.

Thanks,
Armando

On 20 March 2015 at 09:51, Armando M. arma...@gmail.com wrote:



 On 19 March 2015 at 23:59, Akihiro Motoki amot...@gmail.com wrote:

 Forwarding my reply to the other thread too...

 Multiple threads on the same topic is confusing.
 Can we use this thread if we continue the discussion?
 (The title of this thread looks approapriate)

 
 API extensions are the only way that users know which features are
 available until we support API microversioning (v2.1 or something).
 I believe VLAN transparency support should be implemented as an
 extension, not by changing the core resource attributes directly.
 Otherwise users (including Horizon) cannot know whether the field is
 available or not.

 Even though VLAN transparency and MTU support are basic features, they
 are better implemented as extensions.
 Configuration does not help from API perspective as it is not visible
 through the API.


 I was only suggesting the configuration-based approach because it was
 simpler and it didn't lead to the evil mixin business. Granted it does not
 help from the API perspective, but we can hardly claim good discoverability
 of the API capabilities anyway :)

 That said, I'd be ok with moving one or both of these attributes to the
 extension framework. I thought that consensus on having them as core
 resources had been reached at the time of the spec proposal.



 We are discussing moving away from extension attributes, as Armando
 commented, but I think that discussion is about resources/attributes
 which are already well used and required.
 It looks natural to me that new resources/attributes are implemented
 via an extension.
 The situation may be changed once we have support of API microversioning.
 (It is being discussed in the context of Nova API microversioning in
 the dev list started by Jay Pipes.)

 In my understanding, the case of the two IPv6 modes is an exception.
 From the initial design we wanted full support of IPv6 in the subnet
 resource, but through the discussion of IPv6 support it turned out that
 more modes were required, and we decided to change the subnet core
 resource. It is the exception.

 Thanks,
 Akihiro

 2015-03-20 8:23 GMT+09:00 Armando M. arma...@gmail.com:
  Forwarding my reply to the other thread here:
 
  
 
  If my memory does not fail me, changes to the API (new resources, new
  resource attributes or new operations allowed to resources) have always
 been
  done according to these criteria:
 
  an opt-in approach: this means we know the expected behavior of the
 plugin
  as someone has coded the plugin in such a way that the API change is
  supported;
  an opt-out approach: if the API change does not require explicit backend
  support, and hence can be deemed supported by all plugins.
  a 'core' extension (ones available in neutron/extensions) should be
  implemented at least by the reference implementation;
 
  Now, there might have been examples in the past where criteria were not
  met, but these should be seen as exceptions rather than the rule, and as
  such, fixed as defects so that an attribute/resource/operation that is
  accidentally exposed to a plugin will either be honored as expected or an
  appropriate failure is propagated to the user. Bottom line, the server
  must avoid failing silently, because failing silently is bad for the user.
 
  Now both features [1] and [2] violated the opt-in criterion above: they
  introduced resource attributes in the core models, forcing undetermined
  behavior on plugins.
 
  I think that keeping [3,4] as is can lead to a poor user experience; IMO
  it's unacceptable to let a user specify the attribute, and see that
  ultimately the plugin does not support it. I'd be fine if this was an
  accident, but doing this by design is a bit evil. So, I'd suggest the
  following, in order to keep the features in Kilo:
 
  Patches [3, 4] did introduce config flags to control the plugin behavior,
  but it looks like they were not applied correctly; for instance, the
  vlan_transparent case was only applied to ML2. Similarly, the MTU config
  flag was not processed server side to ensure that plugins that do not
  support advertisement do not fail silently. This needs to be rectified.
  As for VLAN transparency, we'd need to implement work item 5 (of 6) of
  spec [2], as this extension without at least a backend able to let tagged
  traffic pass doesn't seem right.
  Ensure we sort out the API tests so that we know how the features behave.
 
  Now granted that controlling the API via config flags is not the best
  solution, as this was always handled through the extension mechanism, but
  since we've been talking about moving away 

Re: [openstack-dev] [cinder] Request exemption for removal of NetApp FC drivers (no voting CI)

2015-03-20 Thread Mike Perez
On 12:33 Fri 20 Mar , ClaytonLuce, Timothy wrote:
 I'd like to point out that for NetApp FC drivers NetApp has been in
 discussions and updating progress on these drivers since their submission.
 
 I will point out a discussion in the Nov Core meeting where I brought up the
 challenge around FC environments and the response I received:

snip

 NetApp has in good faith been working toward implementing a CI for FC,
 I won't go into the challenges of spending $$ for lab equipment to build out
 a scalable quality CI system but suffice it to say the lab equipment is on
 order and scheduled for arrival the first part of April, at which point we
 can put in place the CI for FC.

1) We've been talking about CIs since February 2014. It's really too bad
   this took so long. The deadline itself has been overly announced on the
   mailing list and Cinder IRC meetings. [1][2][3][4][5][6][7][8]

2) We have a number of FC drivers today that had no problem meeting this
   deadline that was expressed in November 2014.

3) I've barely received updates from NetApp folks on progress here. I'm the
   only point of contact, so if you weren't talking to me, then it's unknown.
   I've expressed this to a number of your engineers and in my announcements
   about the CI deadline [8]
   
I had to engage with NetApp to get updates; no one came to me with updates. The
last update I heard from one of your engineers was, we bought the hardware,
but it's just sitting there. That is not acceptable with us being past the
deadline, and shows a clear sign of this not being a priority.

 NetApp has been very forthcoming in our progress and have gotten all our
 other CI systems in place for 7-mode iSCSI/NFS, cDOT iSCSI/NFS and E-Series.
 
 I respectfully request that NetApp FC be removed from this list of drivers to
 be removed for Kilo and placed back in the release and we can negotiate an
 agreed upon time as to when the CI system for these drivers will be in place.

There will be no negotiating on what is an acceptable timeline for NetApp. The
timeline is what we all agreed to as a *community* back at the summit and in
the Cinder IRC meeting.


[1] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
[2] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-21-16.00.log.html
[3] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-02-04-16.04.log.html
[4] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-02-18-16.00.log.html
[5] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-02-25-16.00.log.html
[6] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-03-04-16.00.log.html
[7] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-03-18-16.00.log.html
[8] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054614.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Carl Baldwin
On Fri, Mar 20, 2015 at 12:18 PM, Jay Pipes jaypi...@gmail.com wrote:
 2) Is the action of creating a subnet from a pool better realized as a
 different way of creating a subnet, or should there be some sort of
 pool action? Eg.:

 POST /subnet_pools/my_pool_id/subnet
 {'prefix_len': 24}

 which would return a subnet response like this (note prefix_len might
 not be needed in this case)

 {'id': 'meh',
   'cidr': '192.168.0.0/24',
   'gateway_ip': '192.168.0.1',
   'pool_id': 'my_pool_id'}

 I am generally not a big fan of RESTful actions. But in this case the
 semantics of the API operation are that of a subnet creation from within
 a pool, so that might be ok.


 +1 to using resource subcollection semantics here.

The issue I see here is that there is a window of time between
requesting the subnet allocation and creating the subnet when the
subnet could be taken by someone else.  We need to acknowledge the
window and address it somehow.

Does IPAM hold a reservation or something on the subnet to lock out
others?  Or, does the client just retry on failure?  If there are
multiple clients requesting subnet allocations, it seems that IPAM
should keep some state (e.g. a reservation) to avoid giving out the
same one more than once to different clients at least.

I think that the first operation should result in full allocation of
the subnet to the tenant.  In this case, I think that the allocation
should have an id and be a first class object (it is not currently).
The tenant will need to manage these allocations like anything else.
The tenant will also be required to delete unused allocations.  This
might be the way to go long-term.

If allocations have an id, I think I'd have the client pass in the
allocation id instead of the pool id to the subnet create to
differentiate between asking for a new allocation and using an
existing allocation.

Carl
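Carl's two-phase idea (an allocation with its own id, created first and then referenced by the subnet create) can be sketched in a few lines of Python. This is only an illustrative toy; the `SubnetPool` class and its method names are invented here and are not Neutron code:

```python
import ipaddress
import uuid

class SubnetPool:
    """Toy IPAM pool illustrating allocation as a first-class object.

    A prefix is reserved (and given its own id) before the subnet is
    created, so two clients can never be handed the same prefix.
    """

    def __init__(self, cidr, prefix_len):
        self._free = list(ipaddress.ip_network(cidr).subnets(new_prefix=prefix_len))
        self.allocations = {}   # allocation_id -> network
        self.subnets = {}       # subnet_id -> allocation_id

    def allocate(self):
        """Phase 1: reserve a prefix and return an allocation id."""
        if not self._free:
            raise RuntimeError("pool exhausted")
        alloc_id = str(uuid.uuid4())
        self.allocations[alloc_id] = self._free.pop(0)
        return alloc_id

    def create_subnet(self, allocation_id):
        """Phase 2: turn an existing allocation into a subnet."""
        net = self.allocations[allocation_id]
        subnet_id = str(uuid.uuid4())
        self.subnets[subnet_id] = allocation_id
        return {"id": subnet_id, "cidr": str(net), "allocation_id": allocation_id}

pool = SubnetPool("10.0.0.0/16", prefix_len=24)
a1 = pool.allocate()
a2 = pool.allocate()          # a concurrent client; cannot collide with a1
s1 = pool.create_subnet(a1)
s2 = pool.create_subnet(a2)
assert s1["cidr"] != s2["cidr"]
```

Because the reservation happens in `allocate()`, the window between requesting an allocation and creating the subnet no longer lets another client grab the same prefix; unused allocations would still need to be deleted (or expired) by the tenant, as noted above.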

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-20 Thread Carl Baldwin
+1  Would like to hear feedback hoping that deprecation is viable.

Carl

On Fri, Mar 20, 2015 at 12:57 PM, Assaf Muller amul...@redhat.com wrote:
 Hello everyone,

 The use_namespaces option in the L3 and DHCP Neutron agents controls if you
 can create multiple routers and DHCP networks managed by a single L3/DHCP 
 agent,
 or if the agent manages only a single resource.

 Are there setups out there *not* using the use_namespaces option? I'm curious as
 to why, and if it would be difficult to migrate such a setup to use 
 namespaces.

 I'm asking because use_namespaces complicates Neutron code for what I gather
 is an option that has not been relevant for years. I'd like to deprecate the 
 option
 for Kilo and remove it in Liberty.


 Assaf Muller, Cloud Networking Engineer
 Red Hat

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Jay Pipes

On 03/20/2015 02:51 PM, Carl Baldwin wrote:

On Fri, Mar 20, 2015 at 12:18 PM, Jay Pipes jaypi...@gmail.com wrote:

What about this instead?

POST /v2.0/subnets

{
   'network_id': 'meh',
   'gateway_ip_template': '*.*.*.1',
   'prefix_len': 24,
   'pool_id': 'some_pool'
}

At least that way it's clear the gateway attribute is not an IP, but a
template/string instead?


I thought about doing *s but in the world of Classless Inter-Domain
Routing where not all networks are /24, /16, or /8 it seemed a bit
imprecise.  But, maybe that doesn't matter.


Understood.


I think the more important difference with your proposal here is that
it is passed as a new attribute called 'gateway_ip_template'.  I don't
think that attribute would ever be sent back to the user.  Is it ok to
have write-only attributes?  Is everyone comfortable with that?


I don't see anything wrong with attributes that are only in the request. 
I mean, we have attributes that are only in the response (things like 
status, for example).


Looking at the EC2 API, they support write-only attributes as well, 
for just this purpose:


http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html

The MaxCount and MinCount attributes are not in the response but are in 
the request. Same thing for Nova's POST /servers REST API (min_count, 
max_count).


Best,
-jay
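The request-only pattern Jay describes (like EC2's MinCount/MaxCount, or Nova's min_count/max_count) comes down to the response serializer dropping write-only fields. A minimal sketch, with an attribute map invented purely for illustration:

```python
# Hypothetical attribute map: "in_response" marks whether a field is
# echoed back to the user; write-only fields are accepted on POST only.
SUBNET_ATTRS = {
    "id":                  {"in_response": True},
    "cidr":                {"in_response": True},
    "gateway_ip":          {"in_response": True},
    "pool_id":             {"in_response": True},
    "gateway_ip_template": {"in_response": False},  # write-only, like EC2 MinCount
    "prefix_len":          {"in_response": False},  # write-only
}

def serialize_response(resource):
    """Strip write-only attributes before returning a resource to the user."""
    return {k: v for k, v in resource.items()
            if SUBNET_ATTRS.get(k, {}).get("in_response", True)}

created = {
    "id": "meh",
    "cidr": "192.168.0.0/24",
    "gateway_ip": "192.168.0.1",
    "pool_id": "some_pool",
    "gateway_ip_template": "*.*.*.1",
    "prefix_len": 24,
}
body = serialize_response(created)
assert "gateway_ip_template" not in body and "prefix_len" not in body
```

The point is simply that "write-only" needs no new machinery: the attribute is validated on input like any other, and the serializer never emits it.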

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] May you reconsider about huawei driver?

2015-03-20 Thread Mike Perez
On 09:41 Fri 20 Mar , Jay S. Bryant wrote:
 Mike,
 
 Looks like this removal may have been a mistake.  We should readdress.

This was not a mistake. As Walt mentioned, that CI run failed anyway. Also,
if you take a look at Huawei's CI reporting history, it's neither frequent nor
reliable [1].

This does not satisfactorily meet the requirements. If they have been having
networking issues from January until now, that really sounds to me like it
was *not* a priority.

[1] - 
https://review.openstack.org/#/q/reviewer:+huawei-ci+project:openstack/cinder,n,z

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] vnic_type in OS::Neutron::Port

2015-03-20 Thread Rob Pothier (rpothier)

Hi All,

It was brought to my attention that the recent changes with the vnic_type
possibly should not include the colon in the property value.
In the earlier versions of the review the colon was not in the property,
and was appended later.

https://review.openstack.org/#/c/129353/4..5/heat/engine/resources/neutron/port.py

Let me know if this should go back to version 4, and I will open a bug
and fix it.

Thanks - Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [cinder] Could you please re-consider Oracle ZFS/SA Cinder drivers (iSCSI and NFS)

2015-03-20 Thread Alka Deshpande


On 3/20/15 2:19 PM, Mike Perez wrote:

On 12:37 Fri 20 Mar , Alka Deshpande wrote:

Hi Mike,

My team and I would like to respectfully request a revert for the
change at: https://review.openstack.org/#/c/165939/ to bring back the
ZFSSA drivers to Kilo. Oracle has the CI working internally and we
are in the process of going through internal security review, which
is expected to be done by the end of March.

In the mean time, we have gotten permission to post test results to
an external webserver. We will have results reported by the end of
the week. If need be, we can show proof of  CI running internally by
pasting results to pastebin: http://paste.openstack.org/show/193964/

We have been working diligently on setting up the CI system, and we
have been keeping you updated since the mid-cycle meetup in Austin.
We really appreciate your consideration.  We do take CI setup effort
very seriously and are striving to get it to the level of your
satisfaction.

The tag for Kilo in Cinder has already happened, so it's too late to revert. We
may be able to revisit this in Kilo RC, but I want to see your CI reporting
reliably to Cinder reviews between now and then.
Thanks for the revisit consideration for Kilo RC. My team members will be in
touch with you with the CI reporting information shortly.

thanks
-Alka


FWIW, everyone who was removed told me they were taking it seriously too; just
not seriously enough to have CIs reporting by the overly announced deadline.
[1][2][3][4][5][6][7][8]

[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054614.html
[2] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
[3] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-21-16.00.log.html
[4] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-02-04-16.04.log.html
[5] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-02-18-16.00.log.html
[6] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-02-25-16.00.log.html
[7] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-03-04-16.00.log.html
[8] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-03-18-16.00.log.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Carl Baldwin
On Fri, Mar 20, 2015 at 12:18 PM, Jay Pipes jaypi...@gmail.com wrote:
 What about this instead?

 POST /v2.0/subnets

 {
   'network_id': 'meh',
   'gateway_ip_template': '*.*.*.1',
   'prefix_len': 24,
   'pool_id': 'some_pool'
 }

 At least that way it's clear the gateway attribute is not an IP, but a
 template/string instead?

I thought about doing *s but in the world of Classless Inter-Domain
Routing where not all networks are /24, /16, or /8 it seemed a bit
imprecise.  But, maybe that doesn't matter.

I think the more important difference with your proposal here is that
it is passed as a new attribute called 'gateway_ip_template'.  I don't
think that attribute would ever be sent back to the user.  Is it ok to
have write-only attributes?  Is everyone comfortable with that?

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-20 Thread Assaf Muller
Hello everyone,

The use_namespaces option in the L3 and DHCP Neutron agents controls if you
can create multiple routers and DHCP networks managed by a single L3/DHCP agent,
or if the agent manages only a single resource.

Are there setups out there *not* using the use_namespaces option? I'm curious as
to why, and if it would be difficult to migrate such a setup to use namespaces.

I'm asking because use_namespaces complicates Neutron code for what I gather
is an option that has not been relevant for years. I'd like to deprecate the 
option
for Kilo and remove it in Liberty.


Assaf Muller, Cloud Networking Engineer
Red Hat

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] May you reconsider about huawei driver?

2015-03-20 Thread Andreas Jaeger
On 03/20/2015 04:48 PM, Walter A. Boring IV wrote:
 [...]
  

 *https://review.openstack.org/#/c/152401/

 Post time: 3-18 23:08:45

 This patch also has NO Huawei Volume CI check results.

It has results on patch sets 5, 6, 7, 25, 26, 28, and 45; 45 was the only
successful one. To see all of them you need to click Toggle CI.

Still, I agree with your conclusion that this is not consistent evidence.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Carl Baldwin
On Fri, Mar 20, 2015 at 12:58 PM, Dean Troyer dtro...@gmail.com wrote:
 I thought about doing *s but in the world of Classless Inter-Domain
 Routing where not all networks are /24, /16, or /8 it seemed a bit
 imprecise.  But, maybe that doesn't matter.


 So do a CIDR host address:  0.0.0.1/24 can be merged into a subnet just as
 easily as it can be masked out.

Dean, I'm not sure what you're suggesting.  Are you suggesting that
0.0.0.1/24 is suitable as a template for the gateway?  If so, that is
what I had originally proposed in the spec and what others seemed to
object to.

Maybe what others didn't like was that a template was passed in for
gateway_ip.  In this case, does passing a template like this as
'gateway_ip_template' make them feel better?

Carl
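For what it's worth, both template styles in this sub-thread resolve the same way once IPAM has handed back a CIDR. A hedged sketch (the `resolve_gateway` helper is hypothetical, not part of any proposed API; the wildcard form ignores mask alignment, which is the imprecision mentioned above):

```python
import ipaddress

def resolve_gateway(allocated_cidr, template):
    """Resolve a gateway template against the CIDR IPAM handed back.

    Supports the two styles from the thread: a wildcard form like
    '*.*.*.1' and a CIDR host-address form like '0.0.0.1/24', where the
    template's host bits are merged into the allocated network address.
    """
    net = ipaddress.ip_network(allocated_cidr)
    if "/" in template:
        # CIDR host form: OR the template's host bits into the network bits
        host_bits = int(ipaddress.ip_interface(template).ip)
        return str(ipaddress.ip_address(int(net.network_address) | host_bits))
    # Wildcard form: '*' octets are taken from the network address
    octets = [n if t == "*" else t
              for n, t in zip(str(net.network_address).split("."),
                              template.split("."))]
    return ".".join(octets)

assert resolve_gateway("192.168.7.0/24", "*.*.*.1") == "192.168.7.1"
assert resolve_gateway("192.168.7.0/24", "0.0.0.1/24") == "192.168.7.1"
```

Note that the CIDR host form also behaves sensibly for non-octet-aligned prefixes (e.g. a /20), which is where the wildcard form gets imprecise.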

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Jay Pipes

On 03/09/2015 09:05 AM, Salvatore Orlando wrote:

POST /v2.0/subnets
{'network_id': 'meh',
  'gateway_ip': '0.0.0.1',
  'prefix_len': 24,
  'pool_id': 'some_pool'}

would indicate that the user wishes to use the first address in the
range as the gateway IP, and the API would return something like this:


Yeah, the above is definitely icky. (technical term, there...)

What about this instead?

POST /v2.0/subnets

{
  'network_id': 'meh',
  'gateway_ip_template': '*.*.*.1',
  'prefix_len': 24,
  'pool_id': 'some_pool'
}

At least that way it's clear the gateway attribute is not an IP, but a 
template/string instead?



2) Is the action of creating a subnet from a pool better realized as a
different way of creating a subnet, or should there be some sort of
pool action? Eg.:

POST /subnet_pools/my_pool_id/subnet
{'prefix_len': 24}

which would return a subnet response like this (note prefix_len might
not be needed in this case)

{'id': 'meh',
  'cidr': '192.168.0.0/24',
  'gateway_ip': '192.168.0.1',
  'pool_id': 'my_pool_id'}

I am generally not a big fan of RESTful actions. But in this case the
semantics of the API operation are that of a subnet creation from within
a pool, so that might be ok.


+1 to using resource subcollection semantics here.


3) Would it be possible to consider putting information about how to
generate a subnet from a pool in the subnet request body as follows?

POST /v2.0/subnets
{
  'pool_info':
 {'pool_id': my_pool_id,
  'prefix_len': 24}
}


-1. Too complicated IMO.

Best,
-jay


This would return a response like the previous.
This approach is in theory simple, but composite attributes have already
proved to be a difficult beast; for instance, you can look at
external_gateway_info in the router definition [4]

Thanks for your time and thanks in advance for your feedback.
Salvatore

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/subnet-allocation.html
[2] https://review.openstack.org/#/c/148698/
[3] https://review.openstack.org/#/c/157597/21/neutron/api/v2/attributes.py
[4]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/l3.py#n106


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Liberty specs are now open

2015-03-20 Thread Neil Jerram
Neil Jerram neil.jer...@metaswitch.com writes:

 Hi Michael,

 Michael Still mi...@stillhq.com writes:

 For specs approved in Juno or Kilo, there is a fast track approval
 process for Liberty. The steps to get your spec re-approved are:

  - Copy your spec from the specs/oldrelease/approved directory to
 the specs/liberty/approved directory. [...]

 Perhaps I'm misunderstanding, or have failed to pull the latest
 nova-specs repo correctly, but I don't see a specs/liberty/approved
 directory.  Do you mean just specs/liberty ?

Ah, I guess the intention may be that a proposed spec patch would
_create_ the specs/liberty/approved directory.  Is that right?

   Neil
   

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Barbican : Unable to send PUT request to store the secret

2015-03-20 Thread Asha Seshagiri
Hi All ,

I am unable to send the PUT request using the curl command for storing the
secret.


root@barbican:~# curl -X POST -H 'content-type:application/json' -H
'X-Project-Id: 12345' -d '{"secret": {"name": "secretname", "algorithm":
"aes", "bit_length": 256, "mode": "cbc"}}' http://localhost:9311/v1/secrets
{"secret_ref": "http://localhost:9311/v1/secrets/84aaac35-daa9-4ffb-b03e-18596729705d"}

curl -X PUT -H 'content-type:application/json' -H 'X-Project-Id: 12345' -d
'{"secret": {"payload": "secretput", "payload_content_type": "text/plain"
}}' http://localhost:9311/v1/secrets
*{"code": 405, "description": "", "title": "Method Not Allowed"}*
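
[Editorial note: in Barbican's usual two-step workflow, the payload is PUT
to the secret reference URL returned by the POST, not to the /v1/secrets
collection, which would explain the 405. A hedged sketch, reusing the
secret_ref from the example above; the content type for the PUT is assumed
to match the payload's own type:]

```shell
# Hedged sketch: PUT the raw payload to the secret_ref URL returned by
# the POST, using the payload's content type (text/plain here).
curl -X PUT \
     -H 'content-type: text/plain' \
     -H 'X-Project-Id: 12345' \
     -d 'secretput' \
     http://localhost:9311/v1/secrets/84aaac35-daa9-4ffb-b03e-18596729705d
```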

It would be great if someone could help me.
Thanks in advance.

-- 
*Thanks and Regards,*
*Asha Seshagiri*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] Invalid import in tempest v2 api tests

2015-03-20 Thread Vijay Venkatachalam
Hi:

The LBaaS API tests are failing to run because test_pools.py (and other tests
as well) are importing data_utils from tempest.common.utils.

Looks like data_utils is moved to tempest_lib now and the API tests need to 
change to import from tempest_lib.
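
Presumably the fix is a one-line import change per affected test module,
assuming data_utils kept its submodule path under tempest_lib:

```diff
-from tempest.common.utils import data_utils
+from tempest_lib.common.utils import data_utils
```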

Is someone tracking this?

We are blocked in bringing up a CI for NetScaler.

Also, we are thinking of running lbaas CLI tests as part of the CI.

Any suggestions/comments?


Thanks
Vijay


Sent from Surface

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] FFE request for Firewall Router insertion BP

2015-03-20 Thread Abishek Subramanian (absubram)
Dear Horizon community,

I would like to request an FFE for the review we have out currently for
the firewall feature in the project dashboard.

The review is at - https://review.openstack.org/#/c/162552/


This feature is very important for the neutron FWaaS community, as it moves
the firewall feature out of the experimental stage.
It allows firewalls to finally be applied to a router, and having
Horizon support for this will greatly enhance the ability to use FWaaS.
The neutron side of the feature got merged just before the K-3 deadline,
and there isn't a neutron client dependency on Horizon for this.

A final review version to address comments about UT and also a details
page will be made very shortly.
The review in its current state is ready to test and for anyone to try
out. Please make sure to include the q-fwaas service in your local.conf if
testing.


Requesting Akihiro and David to please sponsor reviews (as discussed on
IRC).

Much thanks and kind regards,

Abishek


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About Sahara EDP New Ideas for Liberty

2015-03-20 Thread Andrew Lazarev
Hi Weiting,

1. Add a schedule feature to run the jobs on time:
This request comes from the customer, they usually run the job in a
specific time every day. So it should be great if there
 is a scheduler to help arrange the regular job to run.
Looks like a great feature. And should be quite easy to implement. Feel
free to create spec for that.

2. A more complex workflow design in Sahara EDP:
Current EDP only provide one job that is running on one cluster.
Yes, and the ability to run several jobs in one Oozie workflow is discussed at
every summit (e.g. 'coordinated jobs' at
https://etherpad.openstack.org/p/kilo-summit-sahara-edp), but so far it has
not been a priority.

But in a real case, it should be more complex: they usually use multiple
jobs to calculate the data and may use several different types of clusters to
process it.
It means that the workflow manager should be on the Sahara side. That looks
like a complicated feature, but we would be happy to help with designing and
implementing it. Please file a proposal for a design session at the upcoming
summit. Are you going to Vancouver?

Another concern is about Spark, for Spark it cannot use Oozie to do this.
So we need to create an abstract layer to help to implement this kind of
scenarios.
If the workflow engine is on the Sahara side, it should work automatically
for all engines.

Thanks,
Andrew.



On Sun, Mar 8, 2015 at 3:17 AM, Chen, Weiting weiting.c...@intel.com
wrote:

  Hi all.



 We got several pieces of feedback about Sahara EDP’s future from some
 customers in China.

 Here are some ideas we would like to share with you and need your input if
 we can implement them in Sahara(Liberty).



 1. Add a schedule feature to run the jobs on time:

 This request comes from the customer, they usually run the job in a
 specific time every day. So it should be great if there is a scheduler to
 help arrange the regular job to run.



 2. A more complex workflow design in Sahara EDP:

 Current EDP only provide one job that is running on one cluster.

 But in a real case, it should be more complex, they usually use multiple
 jobs to calculate the data and may use several different type clusters to
 process it.

 For example: Raw Data - Job A(Cluster A) - Job B(Cluster B) - Job
 C(Cluster A) - Result

 Actually in my opinion, this kind of job could be easy to implement by
 using Oozie as a workflow engine. But for current EDP, it doesn’t implement
 this kind of complex case.

 Another concern is about Spark, for Spark it cannot use Oozie to do this.
 So we need to create an abstract layer to help to implement this kind of
 scenarios.



 However, any suggestion is welcome.

 Thanks.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.utils] allow strutils.mask_password to mask keys dynamically

2015-03-20 Thread Doug Hellmann
Excerpts from Matthew Van Dijk's message of 2015-03-20 15:06:08 +:
 I’ve come across a use case for allowing dynamic keys to be made
 secret. The hardcoded list is good for common keys, but there will be
 cases where masking a custom value is useful without having to add it
 to the hardcoded list.

Can you be more specific about what that case is?

My concern with making some keys optional is that we'll have different
security behavior in different apps, because some will mask values
that are not masked in other places. Part of the point of centralizing
behaviors like this is to keep them consistent across all of the
projects.

 I propose we add an optional parameter that is a list of secret_keys
 whose values will be masked.
 There is concern that this will lead to differing levels of security,
 but I disagree: either the message will be masked before being passed on,
 or mask_password will be called. In this case the developer should be
 aware of the incoming data and mask it manually.
 Keeping a hardcoded list discourages use of the function.
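
As a rough illustration of the proposal (not oslo.utils' actual
implementation, and the extra_keys parameter is hypothetical), a simplified
mask_password might look like this:

```python
import re

# Simplified sketch of a mask_password with a hypothetical extra_keys
# parameter; the real oslo.utils strutils uses a hardcoded pattern list.
_DEFAULT_KEYS = ['password', 'auth_token', 'secret']

def mask_password(message, secret='***', extra_keys=None):
    keys = _DEFAULT_KEYS + list(extra_keys or [])
    for key in keys:
        # Matches  'key': 'value'  or  key = 'value'  style pairs and
        # replaces the value with the mask string.
        pattern = r"(%s['\"]?\s*[:=]\s*['\"]).*?(['\"])" % re.escape(key)
        message = re.sub(pattern, r"\1%s\2" % secret, message)
    return message

print(mask_password("{'api_key': 'abc123'}", extra_keys=['api_key']))
# -> {'api_key': '***'}
```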

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron extenstions

2015-03-20 Thread Armando M.
On 19 March 2015 at 23:59, Akihiro Motoki amot...@gmail.com wrote:

 Forwarding my reply to the other thread too...

 Multiple threads on the same topic is confusing.
 Can we use this thread if we continue the discussion?
 (The title of this thread looks appropriate)

 
 API extension is the only way that users know which features are
 available until we support API microversioning (v2.1 or something).
 I believe VLAN transparency support should be implemented as an
 extension, not by changing the core resources attribute directly.
 Otherwise users (including Horizon) cannot know whether the field is
 available or not.

 Even though VLAN transparency and MTU support are basic features, they
 are better implemented as extensions.
 Configuration does not help from API perspective as it is not visible
 through the API.


I was only suggesting the configuration-based approach because it was
simpler and it didn't lead to the evil mixin business. Granted it does not
help from the API perspective, but we can hardly claim good discoverability
of the API capabilities anyway :)

That said, I'd be ok with moving one or both of these attributes to the
extension framework. I thought that consensus on having them as core
resources had been reached at the time of the spec proposal.



 We are discussing moving away from extension attributes as Armando
 commented, but I think that discussion is about resources/attributes
 which are already well used and required.
 It looks natural to me that new resources/attributes are implemented
 via an extension.
 The situation may be changed once we have support of API microversioning.
 (It is being discussed in the context of Nova API microvesioning in
 the dev list started by Jay Pipes.)

 In my understanding, the case of the IPv6 two-mode attributes is an
 exception. From the initial design we wanted to have full support of
 IPv6 in the subnet resource, but through the discussion of IPv6 support
 it turned out some more modes were required, and we decided to change
 the subnet core resource. It is the exception.

 Thanks,
 Akihiro

 2015-03-20 8:23 GMT+09:00 Armando M. arma...@gmail.com:
  Forwarding my reply to the other thread here:
 
  
 
  If my memory does not fail me, changes to the API (new resources, new
  resource attributes or new operations allowed to resources) have always
 been
  done according to these criteria:
 
  an opt-in approach: this means we know the expected behavior of the
 plugin
  as someone has coded the plugin in such a way that the API change is
  supported;
  an opt-out approach: if the API change does not require explicit backend
  support, and hence can be deemed supported by all plugins.
  a 'core' extension (ones available in neutron/extensions) should be
  implemented at least by the reference implementation;
 
  Now, there might have been examples in the past where criteria were not
 met,
  but these should be seen as exceptions rather than the rule, and as such,
  fixed as defects so that an attribute/resource/operation that is
  accidentally exposed to a plugin will either be honored as expected or an
  appropriate failure is propagated to the user. Bottom line, the server
 must
  avoid to fail silently, because failing silently is bad for the user.
 
  Now both features [1] and [2] violated the opt-in criterion above: they
  introduced resources attributes in the core models, forcing an
 undetermined
  behavior on plugins.
 
  I think that keeping [3,4] as is can lead to a poor user experience; IMO
  it's unacceptable to let a user specify the attribute, and see that
  ultimately the plugin does not support it. I'd be fine if this was an
  accident, but doing this by design is a bit evil. So, I'd suggest the
  following, in order to keep the features in Kilo:
 
  Patches [3, 4] did introduce config flags to control the plugin behavior,
  but it looks like they were not applied correctly; for instance, the
  vlan_transparent case was only applied to ML2. Similarly the MTU config
 flag
  was not processed server side to ensure that plugins that do not support
  advertisement do not fail silently. This needs to be rectified.
  As for VLAN transparency, we'd need to implement work item 5 (of 6) of
 spec
  [2], as this extension without at least a backend able to let tagged
 traffic
  pass doesn't seem right.
  Ensure we sort out the API tests so that we know how the features behave.
 
  Now granted that controlling the API via config flags is not the best
  solution, as this was always handled through the extension mechanism, but
  since we've been talking about moving away from extension attributes with
  [5], it does sound like a reasonable stop-gap solution.
 
  Thoughts?
  Armando
 
  [1]
 
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
  [2]
 
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
  [3]
 
 

Re: [openstack-dev] [ceilometer] Pipeline for notifications does not seem to work

2015-03-20 Thread Igor Degtiarov
Hi Tim

I've checked your case on my devstack, and I've received the new hs06 meter
in my meter list.

So something is wrong with your local env.


Cheers,
Igor D.
Igor Degtiarov
Software Engineer
Mirantis Inc
www.mirantis.com


On Fri, Mar 20, 2015 at 5:40 PM, Tim Bell tim.b...@cern.ch wrote:


 I’m running Juno with ceilometer and trying to produce a new meter which is
 based on vcpus * F (where F is a constant that is different for each
 hypervisor).



 When I create a VM, I get a new sample for vcpus.



 However, it does not appear to fire the transformer.



 The same approach using “cpu” works OK but this one is polling on a regular
 interval rather than a one off notification when the VM is created.



 Any suggestions or alternative approaches for how to get a sample based the
 number of cores scaled by a fixed constant?



 Tim



 In my pipeline.yaml sources,

     - name: vcpu_source
       interval: 180
       meters:
           - vcpus
       sinks:
           - hs06_sink

 In my transformers, I have

     - name: hs06_sink
       transformers:
           - name: unit_conversion
             parameters:
                 target:
                     name: hs06
                     unit: HS06
                     type: gauge
                     scale: 47.0
       publishers:
           - notifier://
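
[Editorial note: the unit_conversion transformer above amounts to scaling
each incoming sample's volume by a constant. A rough sketch of the
computation, not Ceilometer's actual code:]

```python
# Rough sketch of what the unit_conversion transformer computes for each
# incoming vcpus sample (not Ceilometer's actual implementation).
def convert(sample, name='hs06', unit='HS06', scale=47.0):
    return {
        'name': name,
        'unit': unit,
        'type': 'gauge',
        'volume': sample['volume'] * scale,  # e.g. 4 vcpus -> 188.0 HS06
    }

print(convert({'name': 'vcpus', 'unit': 'vcpu', 'volume': 4}))
```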








 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Infra] [cinder] Could you please re-consider Oracle ZFS/SA Cinder drivers (iSCSI and NFS)

2015-03-20 Thread Alka Deshpande

Hi Mike,

My team and I would like to respectfully request a revert of the change
at https://review.openstack.org/#/c/165939/ to bring back the ZFSSA
drivers to Kilo. Oracle has the CI working internally and we are in the
process of going through internal security review, which is expected to 
be done by the end of March.


In the meantime, we have gotten permission to post test results to an
external webserver. We will have results reported by the end of the
week. If need be, we can show proof of CI running internally by pasting
results to pastebin: http://paste.openstack.org/show/193964/


We have been working diligently on setting up the CI system, and we have
been keeping you updated since the mid-cycle meetup in Austin. We really
appreciate your consideration. We do take CI setup effort very
seriously and are striving to get it to the level of your satisfaction.


Thanks for your time and consideration.

-Alka Deshpande
(Sr. Software Development Manager,
 Oracle Corporation)






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Nominate Igor Malinovskiy for core team

2015-03-20 Thread Valeriy Ponomaryov
+1 from me.


On 03/18/2015 03:04 PM, Ben Swartzlander wrote:

 Igor (u_glide on IRC) joined the Manila team back in December and has
 done a consistent amount of reviews and contributed significant new core
 features in the last 2-3 months. I would like to nominate him to join the
 Manila core reviewer team.

 -Ben Swartzlander
 Manila PTL


 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][FFE] - Reseller Implementation

2015-03-20 Thread Raildo Mascena
Hi Geoff,

I'm very happy to know that companies like Cisco want to use Reseller.

When we started the Hierarchical Multitenancy implementation we had some use
cases in mind, such as:

- Organize a divisional department of a company.
- Reseller
- Merge/Acquisition
- Contracting parties

The first use case was addressed by the HMT implementation in Kilo-1;
here we are requesting the FFE for Reseller, and we want to implement the
other use cases in the near future.

These use cases are more focused on the Keystone side, but I believe that
we can expand this feature to the other services, as we are trying to do by
implementing Nested Quotas in Nova
https://github.com/openstack/nova-specs/blob/master/specs/kilo/approved/nested-quota-driver-api.rst
(and in other services that have quota control, in Liberty). We are working
to add HMT support in Horizon.

I like your use cases, and we need a design session at the next summit,
maybe a cross-project session, to define the next steps for Reseller.

Any questions I'm available.

Cheers,

Raildo

On Fri, Mar 20, 2015 at 3:48 PM Geoff Arnold ge...@geoffarnold.com wrote:

 Glad to see this FFE. The Cisco Cloud Services team is very interested in
 the Reseller use case, and in a couple of possible extensions of the work.
 http://specs.openstack.org/openstack/keystone-specs/
 specs/kilo/reseller.html covers the Keystone use cases, but there are
 several other developments required in other OpenStack projects to come up
 with a complete reseller “solution”. For my information, has anyone put
 together an overarching blueprint which captures the top level Reseller use
 cases and identifies all of the sub-projects and their dependencies? If so,
 wonderful. If not, we might try to work on this in the new Product
 Management WG.

 I mentioned “extensions” to http://specs.openstack.org/
 openstack/keystone-specs/specs/kilo/reseller.html . There are two that
 we’re thinking about:
 - the multi-provider reseller: adding the user story where Martha buys
 OpenStack services from two or more
   providers and presents them to her customers through a single Horizon
 instance
 - stacked reseller: Martha buys OpenStack services from a provider, Alex,
 and also from a reseller, Chris, who
   purchases OpenStack services from multiple providers

 In each case, the unit of resale is a “virtual region”: a provider region
 subsetted using HMT/domains, with IdP supplied by the reseller, and
 constrained by resource consumption policies (e.g. Nova AZ “foo” is not
 available to customers of reseller “bar”).

 I strongly doubt that any of this is particularly original, but I haven’t
 seen it written up anywhere.

 Cheers,

 Geoff Arnold
 Cisco Cloud Services
 geoar...@cisco.com
 ge...@geoffarnold.com
 @geoffarnold

 On Mar 19, 2015, at 11:22 AM, Raildo Mascena rail...@gmail.com wrote:

 In addition,

 In the last keystone meeting in March 17 in the IRC channel
 http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2015-03-17.log
  we
 decided that Henry Nash (keystone core) will sponsor this feature as a FFE.

 Cheers,

 Raildo

 On Tue, Mar 17, 2015 at 5:36 PM Raildo Mascena rail...@gmail.com wrote:

 Hi Folks,

 We’ve discussed a lot in the last Summit about the Reseller use case.
 OpenStack needs to grow support for hierarchical ownership of objects. This
 enables the management of subsets of users and projects in a way that is
 much more comfortable for private clouds, besides giving to public cloud
 providers the option of reselling a piece of their cloud.

 More detailed information can be found in the spec for this change at:
 https://review.openstack.org/#/c/139824

 The current code change for this is split into 8 patches (to make it
 easier to review). We currently have 7 patches in code review and we are
 finishing the last one.

 Here is the workflow of our patches:

 1- Adding a field to enable the possibility to have a project with the
 domain feature: https://review.openstack.org/#/c/157427/

 2- Change some constraints and create some options to list projects (for
 is_domain flag, for parent_id):
 https://review.openstack.org/#/c/159944/
 https://review.openstack.org/#/c/158398/
 https://review.openstack.org/#/c/161378/
 https://review.openstack.org/#/c/158372/

 3- Reflect domain operations to project table, mapping domains to
 projects that have the is_domain attribute set to True. In addition, it
 changes the read operations to use only the project table. Then, we will
 drop the Domain Table.
 https://review.openstack.org/#/c/143763/
 https://review.openstack.org/#/c/161854/ (Only patch with work in
 progress)

 4- Finally, the inherited role will not be applied to a subdomain and its
 sub hierarchy. https://review.openstack.org/#/c/164180/

 Since we have the implementation almost complete and waiting for code
 review, I am requesting an FFE to enable the implementation of this last
 patch and to work on having it merged in Kilo.


 

[openstack-dev] [Murano] Feature Freeze Exceptions

2015-03-20 Thread Serg Melikyan
Hi, folks!

Today we released the third milestone of the Kilo release - 2015.1.0b3 [1].
In this milestone we completed 11 blueprints and fixed 13 bugs. We
finally completed a number of big features that we had been working on for
the whole cycle: integration with Congress, Environment Templates, and so on.

The release of kilo-3 means that our Feature Freeze kicks in, and we still
have some features that are not completed yet. The following features are
marked as Feature Freeze exceptions and will be completed during the RC
cycle:

1. Support for configuration languages (puppet, chef) [2]
2. Migrate to YAQL 1.0 [3]
3. Murano Versioning [4]

We also need to finish working on our specifications, because many
specifications are still in review even though the features are already
completed. In this cycle we introduced the murano-specs [5] repository in
experimental mode, but starting from the next cycle we are completely
migrating to the murano-specs model.

P.S. Feature Freeze also means that we are shifting focus from feature
development to testing Murano as much as possible.

References:
[1] https://launchpad.net/murano/+milestone/kilo-3
[2] https://blueprints.launchpad.net/murano/+spec/conf-language-support
[3] https://blueprints.launchpad.net/murano/+spec/migrate-to-yaql-vnext
[4] https://blueprints.launchpad.net/murano/+spec/murano-versioning
[5] http://git.openstack.org/cgit/stackforge/murano-specs/
-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Liberty specs are now open

2015-03-20 Thread Davanum Srinivas
Neil,

yes, see example - https://review.openstack.org/#/c/155116/ - you file
against the /approved directory

-- dims

On Fri, Mar 20, 2015 at 2:41 PM, Neil Jerram neil.jer...@metaswitch.com wrote:
 Neil Jerram neil.jer...@metaswitch.com writes:

 Hi Michael,

 Michael Still mi...@stillhq.com writes:

 For specs approved in Juno or Kilo, there is a fast track approval
 process for Liberty. The steps to get your spec re-approved are:

  - Copy your spec from the specs/oldrelease/approved directory to
 the specs/liberty/approved directory. [...]

 Perhaps I'm misunderstanding, or have failed to pull the latest
 nova-specs repo correctly, but I don't see a specs/liberty/approved
 directory.  Do you mean just specs/liberty ?

 Ah, I guess the intention may be that a proposed spec patch would
 _create_ the specs/liberty/approved directory.  Is that right?

Neil


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Liberty specs are now open

2015-03-20 Thread Neil Jerram
Davanum Srinivas dava...@gmail.com writes:

 Neil,

 yes, see example - https://review.openstack.org/#/c/155116/ - you file
 against the /approved directory

Thanks!
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Carl Baldwin
On Fri, Mar 20, 2015 at 1:09 PM, Dean Troyer dtro...@gmail.com wrote:
 Template is totally the wrong word.  It is a host address without a network.
 The prefix is there for the same purpose, to OR it back into a network
 address.

 I just want us to stop inventing things that already exist.  You want to
 specify the gateway IP, to get that you need a network address, presumably
 to be allocated somewhere and a host address.  OR them together and you have
 an IP address.

I'm not sure template is the wrong word.  But, I think we're just
arguing terminology now.  To me, calling it a template indicates that
it must be combined with something else before it is usable for our
purpose.  Here are some options for what to call the attribute:

gateway_ip_template: '0.0.0.1'
gateway_ip_host:  '0.0.0.1'
gateway_ip_host_part:  '0.0.0.1'

I'm sure there are 100 other names we could use.  The key take-aways
for me are that we don't use '*.*.*.1' and that we don't pass the
host-only part of the address as the 'gateway_ip'.  So, these are
wrong IMO:

gateway_ip:  '0.0.0.1'
gateway_ip_template: '*.*.*.1'
gateway_ip_host_port: '*.*.*.1'
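
[Editorial note: whatever the attribute is called, the OR-merge Dean
describes is mechanical. A sketch using Python's ipaddress module; the
function name is illustrative only, not a proposed API:]

```python
import ipaddress

# Combine an allocated network with a host-only address such as 0.0.0.1
# (the "host part") by OR-ing it into the network address.
def gateway_from_host_part(network_cidr, host_part):
    net = ipaddress.ip_network(network_cidr)
    host = int(ipaddress.ip_address(host_part))
    return ipaddress.ip_address(int(net.network_address) | host)

print(gateway_from_host_part('192.168.0.0/24', '0.0.0.1'))  # -> 192.168.0.1
```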

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-20 Thread Carl Baldwin
On Fri, Mar 20, 2015 at 12:31 PM, Jay Pipes jaypi...@gmail.com wrote:
 This is a question purely out of curiousity. Why is Neutron averse to the
 concept of using tenants as natural ways of dividing up the cloud -- which
 at its core means multi-tenant, on-demand computing and networking?

From what I've heard others say both in this thread and privately to
me, there are already a lot of cases where a tenant will use the same
address range to stamp out identical topologies.  It occurred to me
that we might even be doing this with our own gate infrastructure
but I don't know for sure.

 Is this just due to a lack of traditional use of the term in networking
 literature? Or is this something more deep-grained (architecturally) than
 that?

We already have NAT serving as the natural divider between them and so
there is no reason to create another artificial way of dividing them
up which will force them to change their practices.  I've come to
terms with this since my earlier replies to this thread.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Jay Pipes

On 03/20/2015 03:02 PM, Carl Baldwin wrote:

On Fri, Mar 20, 2015 at 12:18 PM, Jay Pipes jaypi...@gmail.com wrote:

2) Is the action of creating a subnet from a pool better realized as a
different way of creating a subnet, or should there be some sort of
pool action? Eg.:

POST /subnet_pools/my_pool_id/subnet
{'prefix_len': 24}

which would return a subnet response like this (note prefix_len might
not be needed in this case)

{'id': 'meh',
   'cidr': '192.168.0.0/24',
   'gateway_ip': '192.168.0.1',
   'pool_id': 'my_pool_id'}

I am generally not a big fan of RESTful actions. But in this case the
semantics of the API operation are that of a subnet creation from within
a pool, so that might be ok.


+1 to using resource subcollection semantics here.


The issue I see here is that there is a window of time between
requesting the subnet allocation and creating the subnet when the
subnet could be taken by someone else.  We need to acknowledge the
window and address it somehow.


I actually don't think the API URI structure should acknowledge whether 
a window of time is involved in some action. Instead, whether the API 
call returns a 202 Accepted or a 201 Created should be sufficient to 
communicate that information to the API user.



Does IPAM hold a reservation or something on the subnet to lock out
others?  Or, does the client just retry on failure?  If there are
multiple clients requesting subnet allocations, it seems that IPAM
should keep some state (e.g. a reservation) to avoid giving out the
same one more than once to different clients, at least.


Any API that returns 202 Accepted must return information in the HTTP 
headers (Location: URI) about where the client can get an update on 
the status of the resource that should be created:


https://github.com/openstack/api-wg/blob/master/guidelines/http.rst#2xx-success-codes

Whether or not this mechanism returns a reservation resource link 
(something like /reservations/{res_id}), or a link to the resource 
itself (/subnets/{subnet_id}) is entirely implementation-dependent.
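
A sketch of what a client handling both status codes might look like
(the dict-shaped response and PENDING/ACTIVE states are illustrative
assumptions, not the actual Neutron API):

```python
def wait_for_created(resp, fetch, max_polls=10):
    """Handle either outcome: 201 means the body is the final resource;
    202 means poll the Location URI until the resource leaves PENDING.
    'fetch' stands in for an HTTP GET on the Location URI."""
    if resp["status"] == 201:
        return resp["body"]
    if resp["status"] != 202:
        raise ValueError("unexpected status %s" % resp["status"])
    location = resp["headers"]["Location"]  # required per the API-WG guideline
    for _ in range(max_polls):
        resource = fetch(location)
        if resource["status"] != "PENDING":
            return resource
    raise TimeoutError("resource never left PENDING")
```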


I personally prefer the latter, but could go either way.


I think that the first operation should result in full allocation of
the subnet to the tenant.  In this case, I think that the allocation
should have an id and be a first class object (it is not currently).
The tenant will need to manage these allocations like anything else.
The tenant will also be required to delete unused allocations.  This
might be the way to go long-term.


In this case, you are suggesting to make the REST API operation 
synchronous, and should use 201 Created.


There's no reason you couldn't support both the top-level and the 
subcollection resource method of creating the subnet, though.


For instance, these two API calls would essentially be the same:

POST /subnets
{
  'pool_id': 'some_pool',
  'network_id': 'some_network',
  'cidr': '192.168.0.0/24'
}

POST /subnetpools/some_pool/subnets

{
  'network_id': 'some_network',
  'cidr': '192.168.0.0/24'
}

And the above is totally fine, IMO.

Best,
-jay


If allocations have an id, I think I'd have the client pass in the
allocation id instead of the pool id to the subnet create, to
differentiate between asking for a new allocation and using an
existing allocation.

Carl



Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-20 Thread Jay Pipes

On 03/20/2015 03:37 PM, Carl Baldwin wrote:

On Fri, Mar 20, 2015 at 12:31 PM, Jay Pipes jaypi...@gmail.com wrote:

This is a question purely out of curiosity. Why is Neutron averse to the
concept of using tenants as natural ways of dividing up the cloud -- which
at its core means multi-tenant, on-demand computing and networking?


 From what I've heard others say both in this thread and privately to
me, there are already a lot of cases where a tenant will use the same
address range to stamp out identical topologies.  It occurred to me
that we might even be doing this with our own gate infrastructure
but I don't know for sure.


Is this just due to a lack of traditional use of the term in networking
literature? Or is this something more deep-grained (architecturally) than
that?


We already have NAT serving as the natural divider between them and so
there is no reason to create another artificial way of dividing them
up which will force them to change their practices.  I've come to
terms with this since my earlier replies to this thread.


OK, thanks for the info, Carl, appreciated.

Best,
-jay



Re: [openstack-dev] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-20 Thread Neil Jerram
Assaf Muller amul...@redhat.com writes:

 Hello everyone,

Hi Assaf,

 The use_namespaces option in the L3 and DHCP Neutron agents controls if you
 can create multiple routers and DHCP networks managed by a single L3/DHCP 
 agent,
 or if the agent manages only a single resource.

 Are the setups out there *not* using the use_namespaces option? I'm curious as
 to why, and if it would be difficult to migrate such a setup to use 
 namespaces.

 I'm asking because use_namespaces complicates Neutron code for what I gather
 is an option that has not been relevant for years. I'd like to deprecate the 
 option
 for Kilo and remove it in Liberty.

I'm not clear what you're proposing.  After the option is removed, will
the code always behave as it used to when use_namespaces was False, or
as when it was True?

FWIW, my project Calico [1] uses a modified Neutron DHCP agent, where
the behaviour for use_namespaces = False is closer to what we need.  So
we effectively arrange to ignore the use_namespaces setting, and behave
as though it was False [2].

However, that isn't the only change we need, and it's also not clear
that patching the Neutron DHCP agent in this way (or looking at
upstreaming such a patch) will be our long term approach.  Hence this
case probably shouldn't be a significant one for deciding on your
proposal.

Regards,
Neil

[1] http://www.projectcalico.org/
[2] 
https://github.com/Metaswitch/calico-neutron/commit/af2f613368239e2a86b6312bae6e5e70a53d1396




Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-20 Thread Jeremy Stanley
On 2015-03-20 13:37:49 -0600 (-0600), Carl Baldwin wrote:
 From what I've heard others say both in this thread and privately to
 me, there are already a lot of cases where a tenant will use the same
 address range to stamp out identical topologies.  It occurred to me
 that we might even be doing this with our own gate infrastructure
 but I don't know for sure.
[...]

We don't really. I mean we do reuse identical network configurations
for jobs running in parallel but these exist solely within the
nested OpenStack environments (either entirely local to a worker or
in some cases bridged through a GRE tunnel between workers).

Now that's for the test infrastructure specifically... if the
networks for a Tempest-created tenant in DevStack on one of those
workers duplicates a CIDR I wouldn't necessarily know, since I don't
pay particularly close attention to DevStack/Tempest configurations.
-- 
Jeremy Stanley



Re: [openstack-dev] [ceilometer] Pipeline for notifications does not seem to work

2015-03-20 Thread gordon chung
i can confirm it works for me as well... are there any noticeable errors in the 
ceilometer-agent-notifications log? the snippet below looks sane to me though.

cheers,
gord


 From: idegtia...@mirantis.com
 Date: Fri, 20 Mar 2015 18:35:56 +0200
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [ceilometer] Pipeline for notifications does not 
 seem to work
 
 Hi Tim
 
 I've checked your case on my devstack, and I've received the new hs06 meter
 in my meter list.
 
 So something is wrong with your local env.
 
 
 Cheers,
 Igor D.
 Igor Degtiarov
 Software Engineer
 Mirantis Inc
 www.mirantis.com
 
 
 On Fri, Mar 20, 2015 at 5:40 PM, Tim Bell tim.b...@cern.ch wrote:
 
 
  I’m running Juno with ceilometer and trying to produce a new meter which is
  based on vcpus * F (where F is a constant that is different for each
  hypervisor).
 
 
 
  When I create a VM, I get a new sample for vcpus.
 
 
 
  However, it does not appear to fire the transformer.
 
 
 
  The same approach using “cpu” works OK but this one is polling on a regular
  interval rather than a one off notification when the VM is created.
 
 
 
  Any suggestions or alternative approaches for how to get a sample based on
  the number of cores scaled by a fixed constant?
 
 
 
  Tim
 
 
 
  In my pipeline.yaml sources,
 
  - name: vcpu_source
    interval: 180
    meters:
        - "vcpus"
    sinks:
        - hs06_sink
 
  In my transformers, I have
 
  - name: hs06_sink
    transformers:
        - name: "unit_conversion"
          parameters:
              target:
                  name: "hs06"
                  unit: "HS06"
                  type: "gauge"
                  scale: 47.0
    publishers:
        - notifier://
 
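
For what it's worth, the arithmetic the hs06_sink asks unit_conversion to do
is just a scale on the sample volume -- roughly this (a sketch, not
ceilometer's actual transformer code; the sample dict shape is illustrative):

```python
def unit_conversion(sample, name, unit, scale):
    """Relabel a sample and scale its volume, mimicking the hs06_sink
    config above (hs06 = vcpus * 47.0)."""
    return {"name": name, "unit": unit, "type": "gauge",
            "volume": sample["volume"] * scale}


# An 8-vcpu instance would yield an hs06 sample with volume 376.0.
hs06 = unit_conversion({"name": "vcpus", "volume": 8}, "hs06", "HS06", 47.0)
```

So if the vcpus sample arrives but hs06 never appears, the transformer is
not being fired at all rather than computing a wrong value.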
 


Re: [openstack-dev] Barbican : Unable to send PUT request to store the secret

2015-03-20 Thread Asha Seshagiri
Hi All ,

Now the PUT request has been successful: the content type for the header
needs to be text/plain.
I thought that the data type of the data parameter would determine the
content type of the header.

For example, in this case the data is passed in the following format: '{"secret":
{"payload": "secretput", "payload_content_type": "text/plain"}}', which is
JSON.

curl -X PUT -H 'content-type:text/plain' -H 'X-Project-Id: 12345' -d
'{"secret": {"payload": "secretput", "payload_content_type": "text/plain"}}'
http://localhost:9311/v1/secrets/89d424c3-f4c1-4822-8bd7-7691f40f7ba3

Could anyone provide clarity on the content type of the header?

Thanks and Regards,
Asha Seshagiri
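
For what it's worth, the distinction as I understand it (a sketch assuming
Barbican's v1 two-step flow; the helper name is mine, not barbicanclient's
API): the Content-Type header describes the bytes of the request body itself,
while payload_content_type inside the JSON describes the secret payload:

```python
def put_secret_request(base_url, secret_id, payload,
                       body_content_type="text/plain"):
    """Build the pieces of the PUT. The Content-Type header must match
    what the request body actually is, independent of any
    payload_content_type attribute carried inside the data."""
    url = "%s/v1/secrets/%s" % (base_url, secret_id)
    headers = {"Content-Type": body_content_type, "X-Project-Id": "12345"}
    return url, headers, payload
```

Also note the PUT must target the individual secret URI (with the id), not
the /v1/secrets collection -- that is what produced the 405 below.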

On Fri, Mar 20, 2015 at 2:05 PM, Asha Seshagiri asha.seshag...@gmail.com
wrote:

 Hi All ,

 I am unable to send the PUT request using the CURL command for storing the
 secret .


  root@barbican:~# curl -X POST -H 'content-type:application/json' -H
  'X-Project-Id: 12345' -d '{"secret": {"name": "secretname", "algorithm":
  "aes", "bit_length": 256, "mode": "cbc"}}' http://localhost:9311/v1/secrets
  {"secret_ref": "http://localhost:9311/v1/secrets/84aaac35-daa9-4ffb-b03e-18596729705d"}
 
  curl -X PUT -H 'content-type:application/json' -H 'X-Project-Id: 12345' -d
  '{"secret": {"payload": "secretput", "payload_content_type": "text/plain"}}'
  http://localhost:9311/v1/secrets
  *{"code": 405, "description": "", "title": "Method Not Allowed"}*

 It would be great if some one could help me .
 Thanks in advance.

 --
 *Thanks and Regards,*
 *Asha Seshagiri*




-- 
*Thanks and Regards,*
*Asha Seshagiri*


Re: [openstack-dev] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-20 Thread Assaf Muller


- Original Message -
 Assaf Muller amul...@redhat.com writes:
 
  Hello everyone,
 
 Hi Assaf,
 
  The use_namespaces option in the L3 and DHCP Neutron agents controls if you
  can create multiple routers and DHCP networks managed by a single L3/DHCP
  agent,
  or if the agent manages only a single resource.
 
  Are the setups out there *not* using the use_namespaces option? I'm curious
  as
  to why, and if it would be difficult to migrate such a setup to use
  namespaces.
 
  I'm asking because use_namespaces complicates Neutron code for what I
  gather
  is an option that has not been relevant for years. I'd like to deprecate
  the option
  for Kilo and remove it in Liberty.
 
 I'm not clear what you're proposing.  After the option is removed, will
 the code always behave as it used to when use_namespaces was False, or
 as when it was True?

I'm sorry, I should have specified: I propose to remove the option and
keep the default behavior, which is True.

 
 FWIW, my project Calico [1] uses a modified Neutron DHCP agent, where
 the behaviour for use_namespaces = False is closer to what we need.  So
 we effectively arrange to ignore the use_namespaces setting, and behave
 as though it was False [2].
 
 However, that isn't the only change we need, and it's also not clear
 that patching the Neutron DHCP agent in this way (or looking at
 upstreaming such a patch) will be our long term approach.  Hence this
 case probably shouldn't be a significant one for deciding on your
 proposal.
 
 Regards,
 Neil
 
 [1] http://www.projectcalico.org/
 [2]
 https://github.com/Metaswitch/calico-neutron/commit/af2f613368239e2a86b6312bae6e5e70a53d1396
 



Re: [openstack-dev] [heat][congress] Stack lifecycle plugpoint as an enabler for cloud provider's services

2015-03-20 Thread Zane Bitter

On 19/03/15 06:17, VACHNIS, AVI (AVI) wrote:

Hi,

I'm looking at this interesting blueprint 
https://blueprints.launchpad.net/heat/+spec/stack-lifecycle-plugpoint and I 
hope you can easily clarify some things to me.
I see the following statements related to this BP:
* [in problem description section]: There are at least two primary use cases. (1) 
Enabling holistic (whole-pattern) scheduling of the virtual resources in a template instance 
(stack) prior to creating or deleting them. This would usually include making decisions about where 
to host virtual resources in the physical infrastructure to satisfy policy requirements. 
* [in proposed change section]: Pre and post operation methods should not modify 
the parameter stack(s). Any modifications would be considered to be a bug. 
* [in Patch set 7 comment by Thomas]: Are the plug-ins allowed to modify the stack? 
If yes, what parts can they modify? If not, should some code be added to restrict 
modifications?
* [in Patch set 8 comment by Bill] : @Thomas Spatzier, The cleanest approach would 
be to not allow changes to the stack parameter(s). Since this is cloud-provider-supplied 
code, I think that it is reasonable to not enforce this restriction, and to treat any 
modifications of the stack parameter(s) as a defect in the cloud provider's extension 
code.


I think you're asking the wrong question here; it isn't a question of 
what is _allowed_. The plugin runs in the same memory space as 
everything else in Heat. It's _allowed_ to do anything that is possible 
from Python code. The question for anyone writing a plugin is whether 
that's smart.


In terms of guarantees, we can't offer any at all since we don't have 
any plugins in-tree and participating in continuous integration testing.


The plugin interface itself should be considered stable (i.e. there 
would be a long deprecation cycle for any backward-incompatible 
changes), and if anyone brought an accidental breakage to our attention 
I think it would be cause for a revert or fix.


The read-only behaviour of the arguments passed to the plugin as 
parameters (e.g. the Stack object) is not considered stable. In practice 
it tends to change relatively slowly, but there's much less attention 
paid to not breaking this for lifecycle plugins than there is e.g. for 
Resource plugins.


Finally, the behaviour on write of the arguments is not only not 
considered stable, but support for it even working once is explicitly 
disclaimed. You are truly on your own if you try this.



 From the problem description one might understand it's desired to allow modification of resource placement 
(i.e. "making decisions where to host...") by cloud provider plug-point code. Does "should not 
modify the parameter stack" block this desired capability? Or is it just a rule not to 
touch original parameters' values, while still allowing modification of, say, the availability_zone property 
as long as it's not affected by stack parameters?


I don't think the word 'parameter' there refers to the user-supplied 
template parameters, it refers to the formal parameter of the plugin's 
do_p[re|ost]_op() method named 'stack'.
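
To make that concrete, a minimal hypothetical skeleton -- only the
do_pre_op/do_post_op names come from the blueprint; the Stack stub and
everything else here is illustrative, not Heat's actual classes:

```python
class Stack(object):
    """Stub standing in for the real heat.engine.stack.Stack object that
    is passed as the plugin's 'stack' formal parameter."""
    def __init__(self, resources):
        self.resources = resources  # resource name -> resource type string


class PlacementPlugin(object):
    """A lifecycle plugin that inspects -- but never mutates -- 'stack'."""

    def do_pre_op(self, cnxt, stack, current_stack=None, action=None):
        # Read-only walk: e.g. collect the servers a scheduler would place.
        return sorted(name for name, rtype in stack.resources.items()
                      if rtype == "OS::Nova::Server")

    def do_post_op(self, cnxt, stack, current_stack=None, action=None,
                   is_stack_failure=False):
        pass  # notify an external placement service, etc.
```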


On the availability zone thing specifically, I think the way forward is 
to give cloud operators a more sophisticated way of selecting the AZ 
when the user doesn't specify one (i.e. just requests the default). That 
could happen inside Heat, but it would probably be more general if it 
happened in Nova.


You might already be aware of another blueprint that's being worked on 
for distributing scaling group members among AZs. I haven't caught up 
with the latest discussion on that, but I think at some point in the 
future we'll want to make that selection pluggable so that operators can 
have their own schedulers make the decisions there. However, that would 
be a separate plugin interface to the lifecycle plugins.



In case the original purpose of the plugpoint mechanism doesn't allow changing the 
stack, I'd suggest letting the user creating the stack explicitly 'grant' the 
cloud provider permission to enhance the stack's characteristics by enabling the 
cloud provider's extra services.
By 'explicitly grant' I mean introducing a sort of Policy resource type 
through which the stack creator will be able to express the desired services he 
wants to leverage.


Whether the user has given the cloud provider permission is really the 
least of the worries.



In case such a grant appears in the stack, the plug-point code would be allowed to 
modify the stack to provide the desired service.


Again, it's not a matter of being allowed. It's a matter of whether the 
community would freeze future development (e.g. convergence would have 
to be cancelled) in order to maintain complete internal API stability 
(we won't), or whether you're prepared to pay someone to constantly 
maintain your plugin as internal Heat changes constantly break it (I 
wouldn't, but it's up to you).



I guess it may be a possible enabler to Congress' policies as well.


That 

Re: [openstack-dev] [Heat] vnic_type in OS::Neutron::Port

2015-03-20 Thread Zane Bitter

On 20/03/15 14:33, Rob Pothier (rpothier) wrote:


Hi All,

It was brought to my attention that the recent changes with the vnic_type
possibly should not include the colon in the property value.
In the earlier versions of the review the colon was not in the property,
and was appended later.

https://review.openstack.org/#/c/129353/4..5/heat/engine/resources/neutron/port.py

let me know if this should go back to version 4, and I will open a bug
and fix it.


I did some testing and confirmed that the YAML parser handles it without 
issue, though it does look a bit weird. I guess we need a policy on 
extension APIs in general, but after consideration using a colon is 
probably not a terrible way to denote it.
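
A quick check of the sort described, assuming the colon form of the property
name (here "binding:vnic_type", purely as an example key): in block-mapping
context YAML only ends a plain-scalar key at ": " (colon plus space), so a
colon embedded in the name parses fine with PyYAML:

```python
import yaml  # PyYAML

snippet = """
resources:
  my_port:
    type: OS::Neutron::Port
    properties:
      network: mynet
      binding:vnic_type: direct
"""

doc = yaml.safe_load(snippet)
props = doc["resources"]["my_port"]["properties"]
# The embedded colon stays part of the key name.
assert props["binding:vnic_type"] == "direct"
```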


cheers,
Zane.



Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Carl Baldwin
On Fri, Mar 20, 2015 at 1:07 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 03/20/2015 02:51 PM, Carl Baldwin wrote:

 On Fri, Mar 20, 2015 at 12:18 PM, Jay Pipes jaypi...@gmail.com wrote:

 What about this instead?

 POST /v2.0/subnets

 {
'network_id': 'meh',
'gateway_ip_template': '*.*.*.1'
'prefix_len': 24,
'pool_id': 'some_pool'
 }

 At least that way it's clear the gateway attribute is not an IP, but a
 template/string instead?


 I thought about doing *s but in the world of Classless Inter-Domain
 Routing where not all networks are /24, /16, or /8 it seemed a bit
 imprecise.  But, maybe that doesn't matter.


 Understood.

 I think the more important difference with your proposal here is that
 it is passed as a new attribute called 'gateway_ip_template'.  I don't
 think that attribute would ever be sent back to the user.  Is it ok to
 have write-only attributes?  Is everyone comfortable with that?


 I don't see anything wrong with attributes that are only in the request. I
 mean, we have attributes that are only in the response (things like status,
 for example).

 Looking at the EC2 API, they support write-only attributes as well, for
 just this purpose:

 http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html

 The MaxCount and MinCount attributes are not in the response but are in the
 request. Same thing for Nova's POST /servers REST API (min_count,
 max_count).

Makes sense.  I think I like this gateway_ip_template attribute then
for this purpose.

Carl
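
A sketch of how the template might be applied once the subnet is allocated
(the function name is mine, and 'gateway_ip_template' is still just the
proposal in this thread; IPv4-only for simplicity):

```python
import ipaddress


def apply_gateway_template(cidr, template):
    """Fill each '*' octet of the template from the allocated subnet's
    network address; literal octets in the template win."""
    net = ipaddress.ip_network(cidr)
    subnet_octets = str(net.network_address).split(".")
    out = [s if t == "*" else t
           for s, t in zip(subnet_octets, template.split("."))]
    return ".".join(out)
```

This also shows why '*' is imprecise for non-octet-aligned prefixes: a /26
allocation still substitutes whole octets, which is the concern Carl raises
above.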



Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Carl Baldwin
On Fri, Mar 20, 2015 at 1:34 PM, Jay Pipes jaypi...@gmail.com wrote:
 How is 0.0.0.1 a host address? That isn't a valid IP address, AFAIK.

It isn't a valid *IP* address without the network part.  However, it
can be referred to as the host address on the network or the host
part of the IP address.

Carl
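
In stdlib terms, the "host part" is simply the address with the network bits
masked off:

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
addr = ipaddress.ip_address("192.168.0.1")

# Mask off the network bits; what remains is the host part, 0.0.0.1 here.
host_part = ipaddress.ip_address(int(addr) & int(net.hostmask))
```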



Re: [openstack-dev] [Ceilometer]Add hardware pollster of memory buffer and cache

2015-03-20 Thread gordon chung
this seems reasonable... this might fall into the same category as ironic-generated 
metrics (and any other source that has its own defined list of metrics beyond 
ceilometer's list). there was discussion on how to properly handle these cases 
[1][2]

[1] 
http://eavesdrop.openstack.org/meetings/ceilometer/2015/ceilometer.2015-03-05-15.01.log.html
[2] https://review.openstack.org/#/c/130359/

cheers,
gord


From: lgy...@foxmail.com
To: openstack-dev@lists.openstack.org
Date: Thu, 19 Mar 2015 16:44:20 +0800
Subject: [openstack-dev] [Ceilometer]Add hardware pollster of memory buffer 
and cache

Hello everyone,

I am using Ceilometer to monitor both physical servers and virtual machines in 
IAAS. And I found current Ceilometer only supports 4 memory OIDs over SNMP:

    _memory_total_oid = "1.3.6.1.4.1.2021.4.5.0"
    _memory_avail_real_oid = "1.3.6.1.4.1.2021.4.6.0"
    _memory_total_swap_oid = "1.3.6.1.4.1.2021.4.3.0"
    _memory_avail_swap_oid = "1.3.6.1.4.1.2021.4.4.0"

But in practice, memory cache and buffer are also very useful information.
So I'd like to add two hardware pollsters, MemoryCachePollster and 
MemoryBufferPollster.
And I want to know: is there anyone else interested in this, and should I submit 
a blueprint on launchpad?
Thanks.
--
Luo gangyi   luogan...@chinamobile.com

