Re: [openstack-dev] [nova][cinder][ceilometer][glance][all] Loading clients from a CONF object

2014-06-15 Thread Yuriy Taraday
On Fri, Jun 13, 2014 at 3:27 AM, Jamie Lennox jamielen...@redhat.com
wrote:

   And as we're going to have to live with this for a while, I'd rather use
  the more clear version of this in keystone instead of the Heat stanzas.

 Anyone else have an opinion on this?


I like keeping section names simple and clear, but it looks like you
should add some common section ([services_common]?) since 6 out of 6
options in your example will very probably be repeated for every client.
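For illustration, this is the kind of layout I have in mind - the section and option names below are just examples, not actual client options:

```ini
[services_common]
# options that every service client would otherwise repeat
cafile = /etc/ssl/certs/ca.pem
insecure = false
timeout = 30

[nova]
# only per-service overrides remain in the per-client sections
endpoint_override = http://nova.example.com:8774
```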

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-15 Thread Christopher Yeoh
On Sat, 14 Jun 2014 08:40:33 +1000
Michael Still mi...@stillhq.com wrote:

 Greetings,
 
 I would like to nominate Ken'ichi Ohmichi for the nova-core team.
 
 Ken'ichi has been involved with nova for a long time now.  His reviews
 on API changes are excellent, and he's been part of the team that has
 driven the new API work we've seen in recent cycles forward. Ken'ichi
 has also been reviewing other parts of the code base, and I think his
 reviews are detailed and helpful.
 
 Please respond with +1s or any concerns.

+1





[openstack-dev] [heat] How to avoid property revalidation?

2014-06-15 Thread Steven Hardy
Hi all,

So, I stumbled across an issue while fixing up some tests, which is that
AFAICS since Icehouse we continually revalidate every property every time
they are accessed:

https://github.com/openstack/heat/blob/stable/havana/heat/engine/properties.py#L716

This means that, for example, we revalidate every property every time an
event is created:

https://github.com/openstack/heat/blob/stable/havana/heat/engine/event.py#L44

And obviously also every time the property is accessed in the code
implementing whatever action we're handling, and potentially also before
the action (e.g the explicit validate before create/update).

This repeated revalidation seems like it could get very expensive - for
example there are several resources (Instance/Server resources in
particular) which validate against glance via a custom constraint, so we're
probably doing at least 6 calls to glance validating the image every
create.  My suspicion is this is one of the reasons for the performance
regression observed in bug #1324102.

I've been experimenting with some code which implements local caching of
the validated properties, but according to the tests this introduces some
problems where the cached value doesn't always match what is expected. I'm
still investigating why, but I guess it's updates, where we need to
re-resolve what is cached during the update.
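As an illustration of the direction (names and structure hypothetical, not actual Heat code), the memoization plus the update-time invalidation that seems to be the tricky part:

```python
# Hypothetical sketch of local caching of validated properties -- not
# Heat code.  Each property is validated once and memoized; the cache is
# dropped whenever the underlying data changes (the update case).
class CachedProperties:
    def __init__(self, data, validator):
        self._data = data
        self._validator = validator   # e.g. the glance image constraint
        self._cache = {}
        self.validations = 0          # instrumentation for this demo

    def __getitem__(self, key):
        if key not in self._cache:
            self.validations += 1
            self._cache[key] = self._validator(key, self._data[key])
        return self._cache[key]

    def update(self, new_data):
        # on stack update the raw properties change: re-resolve everything
        self._data = new_data
        self._cache.clear()

props = CachedProperties({'image': 'cirros'}, lambda k, v: v)
for _ in range(6):          # six accesses, e.g. event creation etc.
    props['image']
print(props.validations)    # -> 1 instead of 6
```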

Does anyone (and in particular Zane and Thomas who I know have deep
experience in this area) have any ideas on what strategy we might employ to
reduce this revalidation overhead?

Steve



[openstack-dev] [Fuel] Symlinks to new stuff for OpenStack Patching

2014-06-15 Thread Igor Kalnitsky
Hello fuelers,

I'm working on OpenStack patching for 5.1 and I've run into some problems
in the repos/puppets installation process.

The problems are almost the same, so I'll describe them using the repos as
an example.

The repos data are located in /var/www/nailgun. This folder is mounted
as /repo into Nginx container. Nginx container has own /var/www/nailgun
with various symlinks to /repo's content.

So the problem is that we need to add symlinks to the newest repos in the
Nginx container. How should this be solved? Should our fuel-upgrade
script add these symlinks, or will we ship new docker containers which
already contain them?
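For illustration, a rough sketch of what the fuel-upgrade variant could do - only the /var/www/nailgun and /repo paths are from above; the versioned-directory layout is my assumption:

```shell
set -e
# simulate the container filesystem in a temp dir for this sketch
root=$(mktemp -d)
mkdir -p "$root/repo/5.1"            # new repos, mounted from the host
mkdir -p "$root/var/www/nailgun"     # Nginx docroot inside the container

# the upgrade script would add a symlink for each newly shipped version
ln -sfn "$root/repo/5.1" "$root/var/www/nailgun/5.1"

readlink "$root/var/www/nailgun/5.1"
```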


Thanks,
Igor


Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-15 Thread Evgeny Fedoruk
Hi All,

The document was updated and is ready for the next review round.
Main things that were changed:
1. Comments were addressed
2. No back-end re-encryption supported
3. Intermediate certificate chain supported
*Open question: Should the chain be stored in the same TLS container as 
the certificate?

Please review
Regards,
Evgeny


-Original Message-
From: Douglas Mendizabal [mailto:douglas.mendiza...@rackspace.com] 
Sent: Wednesday, June 11, 2014 10:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

Hi Doug,


Barbican does guarantee the integrity and availability of the secret, unless 
the owner of the secret deletes it from Barbican.  We’re not encouraging that 
you store a shadow-copy of the secret either.  This was proposed by the LBaaS 
team as a possible workaround for your use case.
Our recommendation was that there are two options for dealing with Secrets 
being deleted from under you:

If you want to control the lifecycle of the secret so that you can prevent the 
user from deleting the secret, then the secret should be owned by LBaaS, not by 
the user.  You can achieve this by asking the user to upload the secret via 
LBaaS api, and then use Barbican on the back end to store the secret under the 
LBaaS tenant.

If you want the user to own and manage their secret in Barbican, then you have 
to deal with the situation where the user deletes a secret and it is no longer 
available to LBaaS.  This is a situation you would have to deal with even with 
a reference-counting and force-deleting Barbican, so I don’t think you really 
gain anything from all the complexity you’re proposing to add to Barbican.
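To illustrate option 2's failure handling, a small sketch - SecretStore and NotFound are stand-ins here, not the real barbicanclient API:

```python
# Sketch: a deleted Barbican secret becomes a recoverable error instead
# of a crash.  SecretStore/NotFound are illustrative stand-ins.
class NotFound(Exception):
    pass

class SecretStore:
    def __init__(self):
        self._secrets = {}
    def store(self, ref, payload):
        self._secrets[ref] = payload
    def get(self, ref):
        if ref not in self._secrets:
            raise NotFound(ref)
        return self._secrets[ref]
    def delete(self, ref):
        self._secrets.pop(ref, None)

def refresh_listener(store, listener):
    try:
        listener['cert'] = store.get(listener['container_ref'])
        listener['status'] = 'ACTIVE'
    except NotFound:
        # the user deleted their secret: fail this update loudly, but
        # leave the already-deployed load balancer running
        listener['status'] = 'DEGRADED'
    return listener['status']

store = SecretStore()
store.store('ref-1', '-----BEGIN CERTIFICATE-----...')
listener = {'container_ref': 'ref-1'}
print(refresh_listener(store, listener))   # ACTIVE
store.delete('ref-1')
print(refresh_listener(store, listener))   # DEGRADED
```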

-Douglas M.



On 6/11/14, 12:57 PM, Doug Wiegley do...@a10networks.com wrote:

There are other fundamental things about secrets, like relying on their 
presence, and not encouraging a proliferation of a dozen 
mini-secret-stores everywhere to get around that fact, which makes it 
less secret.  Have you considered a ³force² delete flag, required if 
some service is using the secret, sort of ³rm² vs ³rm -f², to avoid the 
obvious foot-shooting use cases, but still allowing the user to nuke it 
if necessary?

Thanks,
Doug


On 6/11/14, 11:43 AM, Clark, Robert Graham robert.cl...@hp.com wrote:

Users have to be able to delete their secrets from Barbican, it's a 
fundamental key-management requirement.

 -Original Message-
 From: Eichberger, German
 Sent: 11 June 2014 17:43
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST 
 document on Gerrit
 
 Sorry, I am late to the party. Holding the shadow copy in the backend 
 is a fine solution.
 
 Also, if containers are immutable, can they be deleted at all? Can we 
 make a requirement that a user can't delete a container in Barbican?
 
 German
 
 -Original Message-
 From: Eichberger, German
 Sent: Wednesday, June 11, 2014 9:32 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST 
 document on Gerrit
 
 Hi,
 
 I think the previous solution is easier for a user to understand. 
 If the referenced container got tampered with or deleted, we throw an 
 error - but keep existing load balancers intact.
 
 With the shadow container we get additional complexity, and the user 
 might be confused about where the values are coming from.
 
 German
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Tuesday, June 10, 2014 12:18 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS support RST 
 document on Gerrit
 
 See Adam's message re: Re: [openstack-dev] [Neutron][LBaaS] Barbican 
 Neutron LBaaS Integration Ideas.
 He's advocating keeping a shadow copy of the private key, owned by the 
 LBaaS service, so that in case a key is tampered with during an LB 
 update, migration, etc., we can still check against the shadow backup 
 and compare it to the user-owned TLS container; if the user's copy is 
 not there, the backup can be used.
 
 On Jun 10, 2014, at 12:47 PM, Samuel Bercovici samu...@radware.com
  wrote:
 
  To elaborate on the case where containers get deleted while LBaaS still 
 references them, we think that the following approach will do:
  * The end user can delete a container and leave a dangling 
 reference in LBaaS.
  * It would be nice to allow adding metadata on the container so that 
 the user will be aware which listeners use this container. This is 
 optional. It can also be optional for LBaaS to implement adding the 
 listener IDs automatically into this metadata, just for information.
  * In LBaaS, if an update happens which requires pulling the container 
 from Barbican and the ID references a non-existing container, the update 
 will fail and will indicate that the referenced certificate does 

Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-15 Thread Alex Xu

+1

On 2014-06-14 06:40, Michael Still wrote:

Greetings,

I would like to nominate Ken'ichi Ohmichi for the nova-core team.

Ken'ichi has been involved with nova for a long time now.  His reviews
on API changes are excellent, and he's been part of the team that has
driven the new API work we've seen in recent cycles forward. Ken'ichi
has also been reviewing other parts of the code base, and I think his
reviews are detailed and helpful.

Please respond with +1s or any concerns.

References:

   
https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z

   https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z

   http://www.stackalytics.com/?module=nova-groupuser_id=oomichi

As a reminder, we use the voting process outlined at
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
core team.

Thanks,
Michael






[openstack-dev] [Neutron] [FWaaS] [securitygroup] [Development]

2014-06-15 Thread Israel Ziv
Hi!
Please let me know if I've reached the proper group.
I am going through neutron's code and have a few questions.


1.   I understood that

a.   'securitygroups' enables an intra-subnet firewall and is aimed at 
allowing/denying traffic between tenants.

b.  'FWaaS' enables an inter-subnet firewall and is aimed at 
allowing/denying traffic within a tenant.

c.   Did I understand correctly?

2.   Does generating a securitygroup rule have an effect on the perimeter 
firewall of the cloud?

Regards
Israel Ziv


Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints

2014-06-15 Thread Gary Kotton


On 6/14/14, 1:05 AM, Anita Kuno ante...@anteaya.info wrote:

On 06/13/2014 05:58 PM, Carlos Gonçalves wrote:
 Let me add to what I've said in my previous email, that Instituto de
Telecomunicacoes and Portugal Telecom are also available to host and
organize a mid cycle sprint in Lisbon, Portugal.
 
 Please let me know who may be interested in participating.
 
 Thanks,
 Carlos Goncalves
 
 On 13 Jun 2014, at 10:45, Carlos Gonçalves m...@cgoncalves.pt wrote:
 
 Hi,

 I like the idea of arranging a mid cycle for Neutron in Europe
somewhere in July. I was also considering inviting folks from the
OpenStack NFV team to meet up for a F2F kick-off.

 I did not know about the sprint being hosted and organised by eNovance
in Paris until just now. I think it is a great initiative from eNovance,
especially because it's not focused on a specific OpenStack project.
So, I'm interested in participating in this sprint to discuss
Neutron and NFV. Two more people from Instituto de Telecomunicacoes and
Portugal Telecom have shown interest too.

 Neutron and NFV team members: who's interested in meeting in Paris, or,
if not available on the date set by eNovance, at another time and place?

 Thanks,
 Carlos Goncalves

 On 13 Jun 2014, at 08:42, Sylvain Bauza sba...@redhat.com wrote:

 On 12/06/2014 15:32, Gary Kotton wrote:
 Hi,
 There is the mid cycle sprint in July for Nova and Neutron. Anyone
interested in maybe getting one together in Europe/Middle East around
the same dates? If people are willing to come to this part of the
world I am sure that we can organize a venue for a few days. Anyone
interested. If we can get a quorum then I will be happy to try and
arrange things.
 Thanks
 Gary



 Hi Gary,

 Wouldn't it be more interesting to have a mid-cycle sprint *before*
the Nova one (which is targeted after juno-2), so that we could discuss
some topics and give a status update to other folks, allowing for a
second run?

 There is already a proposal in Paris for hosting some OpenStack
sprints, see https://wiki.openstack.org/wiki/Sprints/ParisJuno2014

 -Sylvain




 
Neutron already has two sprints scheduled:
https://wiki.openstack.org/wiki/Sprints

Those sprints are both in the US. It is a very long way to travel. If
there is a group of people that can get together in Europe, then it would
be great.


Thanks,
Anita.



Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints

2014-06-15 Thread Dmitry
+1 for Paris/Lisbon

On Sun, Jun 15, 2014 at 4:27 PM, Gary Kotton gkot...@vmware.com wrote:


 [snip]



Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing

2014-06-15 Thread Monty Taylor
On 06/14/2014 09:39 PM, Sukhdev Kapur wrote:
 Oops... sorry, wrong link. Please use this:
 http://paste.openstack.org/show/84073/
 
 
 If anybody needs help, please ping me or go to #openstack-infra.
 

The relevant patch has been merged into upstream setuptools and a new
setuptools release, 5.0.2, has been cut.

 
 On Sat, Jun 14, 2014 at 9:34 PM, Sukhdev Kapur sukhdevka...@gmail.com
 wrote:
 
 Fellow Stackers,

 I have an update on the issue.
 Kudos to the Infra folks, a huge thanks to Monty for coming up with a patch
 for this setuptools issue, and Anita for being on top of this. Please
 follow the steps in http://paste.openstack.org/show/84076/ to pull this
 patch onto your local systems to get past the issue - until the fix in the
 upstream is merged.

 Note that you have to install mercurial to pull this patch.

 Hope this helps.

 regards..
 -Sukhdev




 On Sat, Jun 14, 2014 at 5:45 PM, Sukhdev Kapur sukhdevka...@gmail.com
 wrote:

 I noticed this afternoon (Saturday PST 1:18pm) that most of the Third
 Party test systems started to fail because of the setuptools bug, via a
 dependency in python-swiftclient. I further noticed that some of the CI's
 are voting +1, but when I look through the logs they seem to be hitting
 this issue as well.

 I have been on #openstack-infra most of the afternoon discussing various
 options suggested by folks. Infra folks have confirmed this issue and are
 looking for solution.  I tried fixes suggested in [1] and [2] below and
 removed the setuptools and reinstalled version 3.8. This did not help. I
 have opened the bug[3] to track this issue.

 I thought I'd send out this message in case other CI maintainers are
 investigating this issue.

 Please share ideas/thoughts so that we can get the CIs fixed as soon as
 possible.

 Thanks
 -Sukhdev


 [1] https://bugs.launchpad.net/python-swiftclient/+bug/1326972
 [2] https://mail.python.org/pipermail/distutils-sig/2014-June/024478.html
 [3] https://bugs.launchpad.net/python-swiftclient/+bug/1330140



 
 
 
 




[openstack-dev] [Nova] VMware ESX Driver Deprecation

2014-06-15 Thread Gary Kotton
Hi,
In the Icehouse cycle it was decided to deprecate the VMware ESX driver. The 
motivation for the decision was:

  *   The driver is not validated by Minesweeper
  *   It is not clear if there are actually any users of the driver

Prior to jumping into the proposal we should take into account that the current 
ESX driver does not work with the following branches:

  *   Master (Juno)
  *   Icehouse
  *   Havana

The above are due to VC features that were added over the course of these 
cycles.

On the VC side the ESX host can be added to a cluster and the running VM's 
will continue to run. The problem is how they are tracked and maintained in 
the Nova DB.

Option 1: Moving the ESX(s) into a nova managed cluster. This would require the 
nova DB entry for the instance running on the ESX to be updated to be running 
on the VC host. If the VC host restarts at any point during the above then all of 
the running instances may be deleted (this is due to the fact that 
_destroy_evacuated_instances is invoked when a nova compute is started 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L673). 
This would be disastrous for a running deployment.

If we do decide to go for the above option we can perform a cold migration of 
the instances from the ESX hosts to the VC hosts. The fact that the same 
instance will be running on the ESX would require us to have a 'noop' for the 
migration. This can be done by configuration variables but that will be messy. 
This option would require code changes.

Option 2: Provide the administrator with tools that will enable a migration of 
the running VM's.

  1.  A script that will import OpenStack VM's into the database - the script 
will detect VM's running on a VC and import them into the database.
  2.  A script that will delete VM's running on a specific host

The admin will use these as follows:

  1.  Invoke the deletion script for the ESX
  2.  Add the ESX to a VC
  3.  Invoke the script for importing the OpenStack VM's into the database
  4.  Start the nova compute with the VC driver
  5.  Terminate all Nova computes with the ESX driver

This option requires the addition of the scripts. The advantage is that it does 
not touch any of the running code and is done out of band. A variant of option 
2 would be to have a script that updates the host for the ESX VM's to the VC 
host.
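To illustrate the variant, a sketch of the host-update script against an in-memory SQLite stand-in - the real nova schema and the hostnames here are simplified assumptions:

```python
import sqlite3

# In-memory stand-in for the nova DB; table and columns simplified.
db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE instances (uuid TEXT, host TEXT)")
db.executemany("INSERT INTO instances VALUES (?, ?)",
               [('vm-1', 'esx-01'), ('vm-2', 'esx-01'), ('vm-3', 'kvm-07')])

def migrate_host(conn, esx_host, vc_host):
    # repoint every instance record from the ESX compute to the VC compute
    cur = conn.execute("UPDATE instances SET host = ? WHERE host = ?",
                       (vc_host, esx_host))
    conn.commit()
    return cur.rowcount

print(migrate_host(db, 'esx-01', 'vc-01'))   # -> 2 rows updated
```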

Due to the fact that the code is not being run at the moment, I am in favor of 
the external scripts, as they will be less disruptive and not on a critical 
path. Any thoughts or comments?

Thanks
Gary


Re: [openstack-dev] [Neutron] [FWaaS] [securitygroup] [Development]

2014-06-15 Thread Salvatore Orlando
Hi Israel,

please find my answers inline.
I'm not really an expert in this area, but I hope these answers are
helpful, and, hopefully, correct!

Salvatore


On 15 June 2014 14:55, Israel Ziv israel@huawei.com wrote:

  Hi!

 Please let me know if I’ve reached the proper group.

 I am going through neutron’s code and have a few questions.



 1.   I understood that

 a.   ‘securitygroups’ enables intra-subnet “firewall” and is aimed to
 allow/deny traffic between tenants.

This is kind of correct. However, rather than intra-subnet I would say
that the firewall rules are enforced at the port level - and they're
obviously not just for allowing or denying traffic among tenants, as they
allow expressing a wide variety of rules.
Another thing to note is that a security group rule's action is always
ALLOW - and the rules are enforced on a baseline default DENY ALL policy.

  b.  ‘FWaaS’ enables inter-subnet “firewall” and is aimed to
 allow/deny traffic within tenant.

This is correct too, but as before I would point out that the real
difference is that these rules are enforced at the router level. Also the
nature of the rule is different as the associated actions can be either
ALLOW or DENY.

  c.   Did I understand correctly?

 2.   Does a securitygroup rule generation have effect on the
 perimeter firewall of the cloud?

 If by perimeter you mean the 'edge' of the cloud, i.e. where your router's
gateway ports are plugged, then I would say no. However, I don't remember
whether security group rules are enforced on external networks as well; and
also I'm not sure security groups are the right abstraction in that case.
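A toy model of the distinction described above (not Neutron code): port-level ALLOW-only rules over a default deny, versus router-level first-match rules that may ALLOW or DENY:

```python
# Toy model of the two rule semantics; rules match on port only here.
def sg_allows(rules, pkt):
    # security groups: default DENY at the port; any matching rule permits
    return any(r['port'] == pkt['port'] for r in rules)

def fw_verdict(rules, pkt, default='DENY'):
    # FWaaS at the router: first matching rule wins, ALLOW or DENY
    for r in rules:
        if r['port'] == pkt['port']:
            return r['action']
    return default

sg = [{'port': 22}]                              # permit ssh, deny the rest
fw = [{'port': 22, 'action': 'DENY'},
      {'port': 80, 'action': 'ALLOW'}]

print(sg_allows(sg, {'port': 22}), sg_allows(sg, {'port': 80}))    # True False
print(fw_verdict(fw, {'port': 22}), fw_verdict(fw, {'port': 80}))  # DENY ALLOW
```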




 Regards

 Israel Ziv





Re: [openstack-dev] [heat] How to avoid property revalidation?

2014-06-15 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-06-15 02:40:14 -0700:
 Hi all,
 
 So, I stumbled across an issue while fixing up some tests, which is that
 AFAICS since Icehouse we continually revalidate every property every time
 they are accessed:
 
 https://github.com/openstack/heat/blob/stable/havana/heat/engine/properties.py#L716
 
 This means that, for example, we revalidate every property every time an
 event is created:
 
 https://github.com/openstack/heat/blob/stable/havana/heat/engine/event.py#L44
 
 And obviously also every time the property is accessed in the code
 implementing whatever action we're handling, and potentially also before
 the action (e.g the explicit validate before create/update).
 
 This repeated revalidation seems like it could get very expensive - for
 example there are several resources (Instance/Server resources in
 particular) which validate against glance via a custom constraint, so we're
 probably doing at least 6 calls to glance validating the image every
 create.  My suspicion is this is one of the reasons for the performance
 regression observed in bug #1324102.
 
 I've been experimenting with some code which implements local caching of
 the validated properties, but according to the tests this introduces some
 problems where the cached value doesn't always match what is expected. I'm
 still investigating why, but I guess it's updates, where we need to
 re-resolve what is cached during the update.
 
 Does anyone (and in particular Zane and Thomas who I know have deep
 experience in this area) have any ideas on what strategy we might employ to
 reduce this revalidation overhead?

tl;dr: I think we should only validate structure in validate, and leave
runtime validation to preview.

I've been wondering about what we want to achieve with validation
recently. It seems to me that the goal is to assist template authors
in finding obvious issues in structure and content before they cause a
runtime failure. But the error messages are so unhelpful we basically
get this:

http://cdn.memegenerator.net/instances/500x/50964597.jpg

What holds us back from improving that is the complexity of doing
runtime validation.

To me, runtime is more of a 'preview' problem than a validate problem. A
template that validates once should continue to validate on any version
that supports the template format. But a preview will actually want to
measure runtime things and use parameters, and thus is where runtime
concerns belong.

I wonder if we could move validation out of any runtime context, and
remove any attempts to validate runtime things like image names/ids and
such. That would allow us to remove any but pre-action validation calls.



Re: [openstack-dev] [nova] Distributed locking

2014-06-15 Thread Clint Byrum
Excerpts from Matthew Booth's message of 2014-06-13 01:40:30 -0700:
 On 12/06/14 21:38, Joshua Harlow wrote:
  So just a few thoughts before going to far down this path,
  
  Can we make sure we really really understand the use-case where we think
  this is needed. I think it's fine that this use-case exists, but I just
  want to make it very clear to others why its needed and why distributing
  locking is the only *correct* way.
 
 An example use of this would be side-loading an image from another
 node's image cache rather than fetching it from glance, which would have
 very significant performance benefits in the VMware driver, and possibly
 other places. The copier must take a read lock on the image to prevent
 the owner from ageing it during the copy. Holding a read lock would also
 assure the copier that the image it is copying is complete.

Really? Usually in the unix-inspired world we just open a file and it
stays around until we close it.
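For example, a sketch of the read-lock idea with POSIX advisory locks (fcntl.flock) - illustrative only, not the Nova/VMware driver code:

```python
# A copier takes a shared lock on the cached image; the cache ager's
# exclusive lock is then refused, so the image cannot be aged mid-copy.
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'image.vmdk')
open(path, 'wb').close()                      # stand-in for a cached image

copier = open(path, 'rb')
fcntl.flock(copier, fcntl.LOCK_SH)            # shared: many concurrent readers

ager = open(path, 'rb')
try:
    fcntl.flock(ager, fcntl.LOCK_EX | fcntl.LOCK_NB)
    blocked = False
except OSError:                               # EWOULDBLOCK: a reader holds it
    blocked = True
print(blocked)                                # -> True

fcntl.flock(copier, fcntl.LOCK_UN)            # copy done; the ager may proceed
fcntl.flock(ager, fcntl.LOCK_EX | fcntl.LOCK_NB)
```

(flock treats separately opened descriptors independently, so this demo works within one process; the real use case is cross-process on a shared filesystem, where flock semantics may not hold on NFS and friends.)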



[openstack-dev] [Neutron] Starting contributing to project

2014-06-15 Thread Sławek Kapłoński
Hello,

I want to start contributing to the neutron project. I found a bug which I
want to try to fix: https://bugs.launchpad.net/neutron/+bug/1204956 and I
have a question about the workflow in such a case. Should I clone the
neutron repository and base my changes on the master branch, or should I
start from some other branch? And what should I do next when I have a
patch ready for the bug?
Thanks in advance for any help and explanation.

-- 
Best regards
Slawek Kaplonski
sla...@kaplonski.pl




Re: [openstack-dev] [Neutron] Starting contributing to project

2014-06-15 Thread Clint Byrum
Excerpts from Sławek Kapłoński's message of 2014-06-15 13:10:56 -0700:
 Hello,
 
 I want to start contributing to the neutron project. I found a bug which I
 want to try to fix: https://bugs.launchpad.net/neutron/+bug/1204956 and I
 have a question about the workflow in such a case. Should I clone the
 neutron repository and base my changes on the master branch, or should I
 start from some other branch? And what should I do next when I have a
 patch ready for the bug?
 Thanks in advance for any help and explanation.
 

This should explain everything you need to know:

https://wiki.openstack.org/wiki/Gerrit_Workflow
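To directly answer the branch question: yes, branch from master. A minimal sketch of the flow that page describes, demonstrated in a throwaway repo (the clone URL and the Closes-Bug footer format are from memory - double-check them on the wiki):

```shell
set -e
# In practice you would start from:
#   git clone https://git.openstack.org/openstack/neutron
#   pip install git-review
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'base'

# 1. topic branch off master, named after the bug
git checkout -q -b bug/1204956

# 2. hack, test, then commit with a Closes-Bug footer so Gerrit and
#    Launchpad link the change to the bug
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'Fix the bug

Closes-Bug: #1204956'

# 3. submit for review (not run in this sketch): git review
git rev-parse --abbrev-ref HEAD   # -> bug/1204956
```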



Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-15 Thread Salvatore Orlando
Regarding the two approaches outlined in the top post, I found that the
bullet "This is API versioning done the wrong way" appears in both
approaches.
Is this a mistake or intentional?

From what I gather, the most reasonable approach appears to be starting
with a clean slate, which means having a new API living side by side with
the old one.
I think the naming collision issues should probably be solved by using
distinct namespaces for the two APIs (the old one has /v2/lbaas as a URI
prefix, I think; I have hardly any idea what namespace the new one
should have)

Finally, about deprecation - I see it's been agreed to deprecate the
current API in Juno.
I think this is not the right way of doing things. The limits of the
current API are pretty much universally agreed upon; on the other hand, it
is generally not advisable to deprecate an old API in favour of the new one
in the first iteration in which the new API is published. My preferred
strategy would be to introduce the new API as experimental in the Juno
release, so that it can be evaluated and feedback applied, and to consider
promoting it in K - and contextually deprecate the old API.

As there is quite a radical change between the old and the new model,
keeping the old API indefinitely is a maintenance burden we probably can't
afford, and I would therefore propose complete removal one release cycle
after deprecation. Also - since it seems to me that there is also consensus
regarding having load balancing move away into a separate project so that
it would not be tied anymore to the networking program, the old API is
pretty much just dead weight.

Salvatore


On 11 June 2014 18:01, Kyle Mestery mest...@noironetworks.com wrote:

 I spoke to Mark McClain about this yesterday, I'll see if I can get
 him to join the LBaaS team meeting tomorrow so between he and I we can
 close on this with the LBaaS team.

 On Wed, Jun 11, 2014 at 10:57 AM, Susanne Balle sleipnir...@gmail.com
 wrote:
  Do we know who has an opinion? If so maybe we can reach out to them
 directly
  and ask them to comment.
 
 
  On Tue, Jun 10, 2014 at 6:44 PM, Brandon Logan 
 brandon.lo...@rackspace.com
  wrote:
 
  Well we got a few opinions, but not enough understanding of the two
  options to make an informed decision.  It was requested that the core
  reviewers respond to this thread with their opinions.
 
  Thanks,
  Brandon
 
  On Tue, 2014-06-10 at 13:22 -0700, Stephen Balukoff wrote:
   Yep, I'd like to know here, too--  as knowing the answer to this
   unblocks implementation work for us.
  
  
   On Tue, Jun 10, 2014 at 12:38 PM, Brandon Logan
   brandon.lo...@rackspace.com wrote:
   Any core neutron people have a chance to give their opinions
   on this
   yet?
  
   Thanks,
   Brandon
  
   On Thu, 2014-06-05 at 15:28 +, Buraschi, Andres wrote:
Thanks, Kyle. Great.
   
-Original Message-
From: Kyle Mestery [mailto:mest...@noironetworks.com]
Sent: Thursday, June 05, 2014 11:27 AM
To: OpenStack Development Mailing List (not for usage
   questions)
Subject: Re: [openstack-dev] [Neutron] Implementing new
   LBaaS API
   
On Wed, Jun 4, 2014 at 4:27 PM, Brandon Logan
   brandon.lo...@rackspace.com wrote:
 Hi Andres,
 I've assumed (and we know how assumptions work) that the
   deprecation
 would take place in Juno and after a cyle or two it would
   totally be
 removed from the code.  Even if #1 is the way to go, the
   old /vips
 resource would be deprecated in favor of /loadbalancers
   and /listeners.

 I agree #2 is cleaner, but I don't want to start on an
   implementation
 (though I kind of already have) that will fail to be
   merged in because
 of the strategy.  The strategies are pretty different so
   one needs to
 be decided on.

 As for where LBaaS is intended to end up, I don't want to
   speak for
 Kyle, so this is my understanding; It will end up outside
   of the
 Neutron code base but Neutron and LBaaS and other services
   will all
 fall under a Networking (or Network) program.  That is my
 understanding and I could be totally wrong.

That's my understanding as well, I think Brandon worded it
   perfectly.
   
 Thanks,
 Brandon

 On Wed, 2014-06-04 at 20:30 +, Buraschi, Andres wrote:
 Hi Brandon, hi Kyle!
 I'm a bit confused about the deprecation (btw, thanks for
   sending this Brandon!), as I (wrongly) assumed #1 would be the
   chosen path for the new API implementation. I understand the
   

Re: [openstack-dev] [Neutron] REST API - entity level validation

2014-06-15 Thread Salvatore Orlando
Avishay,

what you say here is correct.
However, as we are in the process of moving to Pecan as REST API framework
I would probably refrain from adding new features to it at this stage.

Therefore, even if far from ideal, this kind of validation should perhaps
be performed in the DB layer. I think this already happens for several API
resources.

Salvatore


On 5 June 2014 13:01, Avishay Balderman avish...@radware.com wrote:

   Hi

 With the current REST API engine in neutron we can declare attributes
 validations.

 We have a rich set of validation functions
 https://github.com/openstack/neutron/blob/master/neutron/api/v2/attributes.py

 However we do not have the concept of entity level validation.



 Example:

 I have an API ‘create-something’ and Something is an entity having 2
 attributes:

 Something {

   Attribute A

  Attribute B

 }

 And according to the business logic A must be greater than B





 As for today our framework cannot handle this kind of validation, so the
 call goes into a lower layer of neutron and must be validated there.

 Example: https://review.openstack.org/#/c/93871/9



 With this we have the validations implemented across multiple layers. I
 think we had better have the validations in one layer.



 Thanks



 Avishay





Re: [openstack-dev] [heat] How to avoid property revalidation?

2014-06-15 Thread Steve Baker
On 16/06/14 06:26, Clint Byrum wrote:
 Excerpts from Steven Hardy's message of 2014-06-15 02:40:14 -0700:
 Hi all,

 So, I stumbled accross an issue while fixing up some tests, which is that
 AFAICS since Icehouse we continually revalidate every property every time
 they are accessed:

 https://github.com/openstack/heat/blob/stable/havana/heat/engine/properties.py#L716

 This means that, for example, we revalidate every property every time an
 event is created:

 https://github.com/openstack/heat/blob/stable/havana/heat/engine/event.py#L44

 And obviously also every time the property is accessed in the code
 implementing whatever action we're handling, and potentially also before
 the action (e.g the explicit validate before create/update).

 This repeated revalidation seems like it could get very expensive - for
 example there are several resources (Instance/Server resources in
 particular) which validate against glance via a custom constraint, so we're
 probably doing at least 6 calls to glance validating the image every
 create.  My suspicion is this is one of the reasons for the performance
 regression observed in bug #1324102.

 I've been experimenting with some code which implements local caching of
 the validated properties, but according to the tests this introduces some
 problems where the cached value doesn't always match what is expected,
 still investigating why but I guess it's updates where we need to
 re-resolve what is cached during the update.

 Does anyone (and in particular Zane and Thomas who I know have deep
 experience in this area) have any ideas on what strategy we might employ to
 reduce this revalidation overhead?
 tl;dr: I think we should only validate structure in validate, and leave
 runtime validation to preview.

 I've been wondering about what we want to achieve with validation
 recently. It seems to me that the goal is to assist template authors
 in finding obvious issues in structure and content before they cause a
 runtime failure. But the error messages are so unhelpful we basically
 get this:

 http://cdn.memegenerator.net/instances/500x/50964597.jpg

 What holds us back from improving that is the complexity of doing
 runtime validation.

 To me, runtime is more of a 'preview' problem than a validate problem. A
 template that validates once should continue to validate on any version
 that supports the template format. But a preview will actually want to
 measure runtime things and use parameters, and thus is where runtime
 concerns belong.

 I wonder if we could move validation out of any runtime context, and
 remove any attempts to validate runtime things like image names/ids and
 such. That would allow us to remove any but pre-action validation calls.

Agreed, and parameter validation needs to be included in this too -
especially with custom constraints which can make API calls.
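For reference, the local-caching idea Steven describes can be sketched
roughly like this; a toy model, not Heat's actual Properties implementation,
and the update/invalidation problem he mentions is exactly what such a cache
has to handle:

```python
# Sketch: cache the result of expensive per-property validation so that
# repeated property reads (event creation, action handlers, etc.) do not
# re-run remote checks such as glance image lookups on every access.
class CachedProperties:
    def __init__(self, data, validator):
        self._data = data
        self._validator = validator   # may make API calls; expensive
        self._validated = {}          # property name -> validated value

    def __getitem__(self, name):
        if name not in self._validated:
            value = self._data[name]
            self._validator(name, value)      # runs at most once per name
            self._validated[name] = value
        return self._validated[name]

calls = []
props = CachedProperties({'image': 'fedora-20'},
                         lambda name, value: calls.append(name))
props['image']
props['image']   # second access hits the cache; validator not re-run
```

On update, `_validated` would have to be cleared (or keyed on the resolved
data) whenever the underlying data is re-resolved, otherwise stale values
leak through; that matches the test failures Steven saw.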



Re: [openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-15 Thread Brandon Logan
Thank you Salvatore for your feedback.

Comments in-line.

On Sun, 2014-06-15 at 23:26 +0200, Salvatore Orlando wrote:
 Regarding the two approaches outlined in the top post, I found out
 that the bullet "This is API versioning done the wrong way" appears in
 both approaches.
 Is this a mistake or intentional?

No, it was intentional.  In my opinion they are both the wrong way.  It
would be best to be able to do a version at the resource layer but we
can't since lbaas is a part of Neutron and its version is directly tied
to Neutron's.  Another possibility is to have the resource look like:

http(s)://neutron.endpoint/v2/lbaas/v2

This looks very odd to me though and sets a bad precedent.  That is just
my opinion though.  So I wouldn't call this the right way either.  Thus,
I do not know of a right way to do this other than choosing the right
alternative way.

 
 
 From what I gather, the most reasonable approach appears to be
 starting with a clean slate, which means having a new API living side
 by side with the old one.
 I think the naming collision issues should probably be solved using
 distinct namespaces for the two API (the old one has /v2/lbaas as a
 URI prefix I think, I have hardly any idea about what namespace the
 new one should have)
 

I'm in agreement with you as well. The old one has /v2/lb as the prefix.
I figured the new one could be /v2/lbaas which I think works out well.

Another thing to consider that I did not think about in my original
message is that a whole new load balancing agent will have to be created
as well since its code is written with the pool being the root object.
So that should be taken into consideration.  So to be perfectly clear,
starting with a clean slate would involve the following:

1. New loadbalancer extension
2. New loadbalancer plugin
3. New lbaas_agentscheduler extension
4. New agent_scheduler plugin.

Also, I don't believe doing this would allow the two to be deployed at
the same time.  I believe the setup.cfg file would have to be modified
to point to the new plugins.  I could be wrong about that though.

 
 Finally, about deprecation - I see it's been agreed to deprecate the
 current API in Juno.
 I think this is not the right way of doing things. The limits of the
 current API are pretty much universally agreed; on the other hand, it
 is generally not advisable to deprecate an old API in favour of the
 new one at the first iteration such API is published. My preferred
 strategy would be to introduce the new API as experimental in the Juno
 release, so that it can be evaluated, feedback applied, and promotion
 considered in K - and contextually deprecate the old API.
 
 
 As there is quite a radical change between the old and the new model,
 keeping the old API indefinitely is a maintenance burden we probably
 can't afford, and I would therefore propose complete removal one
 release cycle after deprecation. Also - since it seems to me that
 there is also consensus regarding having load balancing move away into
 a separate project so that it would not be tied anymore to the
 networking program, the old API is pretty much just dead weight.
 
 Salvatore

Good idea on that.  I'll bring this up with everyone at the hackathon
this week if it is not already on the table.

Thanks again for your feedback.

Brandon
 
 
 On 11 June 2014 18:01, Kyle Mestery mest...@noironetworks.com wrote:
 I spoke to Mark McClain about this yesterday, I'll see if I
 can get
 him to join the LBaaS team meeting tomorrow so between he and
 I we can
 close on this with the LBaaS team.
 
 On Wed, Jun 11, 2014 at 10:57 AM, Susanne Balle
 sleipnir...@gmail.com wrote:
  Do we know who has an opinion? If so maybe we can reach out
 to them directly
  and ask them to comment.
 
 
  On Tue, Jun 10, 2014 at 6:44 PM, Brandon Logan
 brandon.lo...@rackspace.com
  wrote:
 
  Well we got a few opinions, but not enough understanding of
 the two
  options to make an informed decision.  It was requested
 that the core
  reviewers respond to this thread with their opinions.
 
  Thanks,
  Brandon
 
  On Tue, 2014-06-10 at 13:22 -0700, Stephen Balukoff wrote:
   Yep, I'd like to know here, too--  as knowing the answer
 to this
   unblocks implementation work for us.
  
  
   On Tue, Jun 10, 2014 at 12:38 PM, Brandon Logan
   brandon.lo...@rackspace.com wrote:
   Any core neutron people have a chance to give
 their opinions
   on this
   yet?
  
   Thanks,
   Brandon
  
   On Thu, 2014-06-05 at 15:28 +, Buraschi,
 Andres wrote:
Thanks, Kyle. Great.
   

Re: [openstack-dev] [Neutron][ml2] Tracking the reviews for ML2 related specs

2014-06-15 Thread Mohammad Banikazemi

Hi Irena,

The R columns are for non-core subgroup reviews and the C columns are
for the core reviewers.
As of now, we only have specs listed/tracked in this wiki. Expanding the
wiki to include the code was briefly discussed last week. One option that
comes to mind is expanding the table for merged specs (the second table in
the wiki page) for tracking the reviews of the code. I have updated that
table and you and others can update the row for your specs and we can
discuss during the ML2 meeting and see if others see this as helpful. I
think considering the limited number of specs/code being tracked this may
be helpful as long as the code owners keep the table up to date.

Best,

Mohammad




From:   Irena Berezovsky ire...@mellanox.com
To: Mohammad Banikazemi/Watson/IBM@IBMUS,
Cc: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/15/2014 12:51 AM
Subject:RE: [openstack-dev] [Neutron][ml2] Tracking the reviews for ML2
related specs



Hi Mohammad,
Thank you for sharing the links.
Can you please elaborate on the columns of the table in [1]? Is [R] supposed to
be for spec review and [C] for code review?
If this is correct, would it be possible to add [C] columns for already merged
specs that still have the code under review?

Thanks a lot,
Irena

From: Mohammad Banikazemi [mailto:m...@us.ibm.com]
Sent: Friday, June 13, 2014 8:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][ml2] Tracking the reviews for ML2
related specs



In order to make the review process a bit easier (without duplicating too
much data and without creating too much overhead), we have created a wiki
to keep track of the ML2 related specs for the Juno cycle [1]. The idea is
to organize the people who participate in the ML2 subgroup activities and
get the related specs reviewed as much as possible in the subgroup before
asking the broader community to review. (There is of course nothing that
prevents others from reviewing these specs as soon as they are available
for review.) If you have any ML2 related spec under review or being
planned, you may want to update the wiki [1] accordingly.

We will see if this will be useful or not. If you have any comments or
suggestions please post here or bring them to the IRC weekly meetings [2].

Best,

Mohammad

[1] https://wiki.openstack.org/wiki/Tracking_ML2_Subgroup_Reviews
[2] https://wiki.openstack.org/wiki/Meetings/ML2



Re: [openstack-dev] [TripleO] Reviews - we need your help!

2014-06-15 Thread James Polley
Thanks Tomas and Matthew

I've updated https://wiki.openstack.org/wiki/TripleO#Review_team to have a
link to the new dashboard.


On Fri, Jun 13, 2014 at 5:20 PM, Macdonald-Wallace, Matthew 
matthew.macdonald-wall...@hp.com wrote:

 Thanks Tomas,

 http://bit.ly/1lsg3SH now contains the missing projects and has been
 re-ordered slightly so that you see outdated reviews first then the
 Jenkins/-1 stuff.

 Cheers,

 Matt

  -Original Message-
  From: Tomas Sedovic [mailto:tsedo...@redhat.com]
  Sent: 12 June 2014 19:03
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [TripleO] Reviews - we need your help!
 
  On 12/06/14 16:02, Macdonald-Wallace, Matthew wrote:
   FWIW, I've tried to make a useful dashboard for this using Sean
   Dague's gerrit-dash-creator [0].
  
  
  
   Short URL is http://bit.ly/1l4DLFS long url is:
  
  
  
  
  https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack
  %2Ftripleo-incubator+OR+project%3Aopenstack%2Ftripleo-image-
  elements+OR+project%3Aopenstack%2Ftripleo-heat-
  templates+OR+project%3Aopenstack%2Ftripleo-
  specs+OR+project%3Aopenstack%2Fos-apply-
  config+OR+project%3Aopenstack%2Fos-collect-
  config+OR+project%3Aopenstack%2Fos-refresh-
  config+OR+project%3Aopenstack%2Fdiskimage-
  builder%29+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-
  1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-
  Review%3E%3D-
  2%252cself+branch%3Amaster+status%3Aopentitle=TripleO+ReviewsYour+a
  re+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%
  3AselfPassed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-
  Review%3C%3D-
  1+limit%3A100Changes+with+no+code+review+in+the+last+48hrs=NOT+label
  %3ACode-
  Review%3C%3D2+age%3A48hChanges+with+no+code+review+in+the+last+5+
  days=NOT+label%3ACode-
  Review%3C%3D2+age%3A5dChanges+with+no+code+review+in+the+last+7+d
  ays=NOT+label%3ACode-Review%!
   3C%3D2+age
  %3A7dSome+adjustment+required+%28-1+only%29=label%3ACode-
  Review%3D-1+NOT+label%3ACode-Review%3D-
  2+limit%3A100Dead+Specs+%28-2%29=label%3ACode-Review%3C%3D-2
  
  
  
   I'll add it to my fork and submit a PR if people think it useful.
 
  I was about to mention this, too. The gerrit-dash-creator is fantastic.
 
  This one is missing the Tuskar-related projects (openstack/tuskar,
  openstack/tuskar-ui and openstack/python-tuskarclient) and also
 openstack/os-
  cloud-config, though.
 
 
  
  
  
   Matt
  
  
  
   [0] https://github.com/sdague/gerrit-dash-creator
  
  
  
   *From:*James Polley [mailto:j...@jamezpolley.com]
   *Sent:* 12 June 2014 06:08
   *To:* OpenStack Development Mailing List (not for usage questions)
   *Subject:* [openstack-dev] [TripleO] Reviews - we need your help!
  
  
  
   During yesterday's IRC meeting, we realized that our review stats are
   starting to slip again.
  
   Just after summit, our stats were starting to improve. In the
    2014-05-20 meeting, the TripleO "Stats since the last revision
    without -1 or -2" [1] looked like this:
  
    1st quartile wait time: 1 days, 1 hours, 11 minutes
  
   Median wait time: 6 days, 9 hours, 49 minutes
  
   3rd quartile wait time: 13 days, 5 hours, 46 minutes
  
  
  
   As of yesterdays meeting, we have:
  
    1st quartile wait time: 4 days, 23 hours, 19 minutes
  
   Median wait time: 7 days, 22 hours, 8 minutes
  
   3rd quartile wait time: 13 days, 19 hours, 17 minutes
  
  
  
   This really hurts our velocity, and is especially hard on people
   making their first commit, as it can take them almost a full work week
   before they even get their first feedback.
  
   To get things moving, we need everyone to make a special effort to do
   a few reviews every day. It would be most helpful if you can look for
   older reviews without a -1 or -2 and help those reviews get over the
 line.
  
   If you find reviews that are just waiting for a simple fix - typo or
   syntax fixes, simple code fixes, or a simple rebase - it would be even
   more helpful if you could take a few minutes to make those patches,
   rather than just leaving the review waiting for the attention of the
   original submitter.
  
   Please keep in mind that these stats are based on all of our projects,
   not just tripleo-incubator. To save you heading to the wiki, here's a
   handy link that shows you all open code reviews in all our projects:
  
   bit.ly/1hQco1N http://bit.ly/1hQco1N
  
   If you'd prefer the long version:
   https://review.openstack.org/#/q/status:open+%28project:openstack/trip
   leo-incubator+OR+project:openstack/tuskar+OR+project:openstack/tuskar-
   ui+OR+project:openstack-infra/tripleo-ci+OR+project:openstack/os-apply
   -config+OR+project:openstack/os-collect-config+OR+project:openstack/os
   -refresh-config+OR+project:openstack/os-cloud-config+OR+project:openst
   ack/tripleo-image-elements+OR+project:openstack/tripleo-heat-templates
   +OR+project:openstack/diskimage-builder+OR+project:openstack/python-tu
   

[openstack-dev] [Fuel] Bug squashing day on June, 17

2014-06-15 Thread Mike Scherbakov
Fuelers,
as we discussed during last IRC meeting
http://eavesdrop.openstack.org/meetings/fuel/2014/fuel.2014-06-12-16.01.html,
I'm scheduling bug squashing day on Tuesday, June 17th.

I'd like to propose the following order of bugs processing:

   1. Confirm / triage bugs in New status
   
https://bugs.launchpad.net/fuel/+bugs?field.searchtext=orderby=-importancefield.status%3Alist=NEWassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=onsearch=Search,
   assigning them to yourself to avoid the situation when a few people work on
   the same bug
   2. Review bugs in Incomplete status
   
https://bugs.launchpad.net/fuel/+bugs?field.searchtext=orderby=-importancefield.status%3Alist=INCOMPLETE_WITH_RESPONSEfield.status%3Alist=INCOMPLETE_WITHOUT_RESPONSEassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=onsearch=Search,
   move them to Confirmed / Triaged or close as Invalid.
   3. Follow https://wiki.openstack.org/wiki/BugTriage for the rest (this
   is a MUST-read for those who have not done it yet)

When we are more or less done with triaging, we can start proposing fixes
for bugs. I suggest using the #fuel-dev IRC channel extensively for
synchronization, and while one person fixes some bugs, others can review
the fixes. Don't hesitate to ask for code reviews.

Regards,
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-15 Thread wu jiang
+1. I got lots of helpful assistance from him. :)


On Sun, Jun 15, 2014 at 8:44 PM, Alex Xu x...@linux.vnet.ibm.com wrote:

 +1


 On 2014年06月14日 06:40, Michael Still wrote:

 Greetings,

 I would like to nominate Ken'ichi Ohmichi for the nova-core team.

 Ken'ichi has been involved with nova for a long time now.  His reviews
 on API changes are excellent, and he's been part of the team that has
 driven the new API work we've seen in recent cycles forward. Ken'ichi
 has also been reviewing other parts of the code base, and I think his
 reviews are detailed and helpful.

 Please respond with +1s or any concerns.

 References:

https://review.openstack.org/#/q/owner:ken1ohmichi%
 2540gmail.com+status:open,n,z

https://review.openstack.org/#/q/reviewer:ken1ohmichi%
 2540gmail.com,n,z

http://www.stackalytics.com/?module=nova-groupuser_id=oomichi

 As a reminder, we use the voting process outlined at
 https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
 core team.

 Thanks,
 Michael






Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-15 Thread Dan Prince
On Sat, 2014-06-14 at 08:40 +1000, Michael Still wrote:
 Greetings,
 
 I would like to nominate Ken'ichi Ohmichi for the nova-core team.
 
 Ken'ichi has been involved with nova for a long time now.  His reviews
 on API changes are excellent, and he's been part of the team that has
 driven the new API work we've seen in recent cycles forward. Ken'ichi
 has also been reviewing other parts of the code base, and I think his
 reviews are detailed and helpful.
 
 Please respond with +1s or any concerns.

+1

 
 References:
 
   
 https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z
 
   https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z
 
   http://www.stackalytics.com/?module=nova-groupuser_id=oomichi
 
 As a reminder, we use the voting process outlined at
 https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
 core team.
 
 Thanks,
 Michael
 





[openstack-dev] How to assign dynamic property to a Data Object via VMware SDK

2014-06-15 Thread Feng Xi Yan
Hi, Fellows,


Anyone familiar with VMWARE SDK?


I am now trying to create a network (or a dv port group) in vCenter via the
VMware SDK.


When I create a dvpg, I need to set the vlan on the config spec's
defaultPortConfig (type DO:DVPortSetting), but vlan is a dynamic property of
DO:DVPortSetting.


I tried to directly assign a VmwareDistributedVirtualSwitchVlanIdSpec object to
the defaultPortConfig.vlan property, but the task failed with this error:
oslo.vmware.exceptions.VimException: Exception in CreateDVPortgroup_Task.
Cause: Type not found: 'vlan'


I think this is not the right way.
Does anybody have any clues?


Here is sample code I tried:
client_factory = self._session.vim.client.factory
config_spec = client_factory.create('ns0:DVPortgroupConfigSpec')
port_config_spec = client_factory.create('ns0:DVPortSetting')
vlan_spec = client_factory.create('ns0:VmwareDistributedVirtualSwitchVlanIdSpec')
vlan_spec.vlanId = 100
vlan_spec.inherited = 'True'
port_config_spec.vlan = vlan_spec
config_spec.name = 'test_21'
config_spec.description = 'test'
config_spec.numPorts = '100'
config_spec.autoExpand = 'True'
config_spec.type = 'earlyBinding'
config_spec.defaultPortConfig = port_config_spec
dvs = vmware_util.get_dvs(self._session, CONF.VMWARE.dvswitch)
# invoke_api takes the API method name as a string
pg_create_task = self._session.invoke_api(self._session.vim,
                                          'CreateDVPortgroup_Task',
                                          dvs, spec=config_spec)
result = self._session.wait_for_task(pg_create_task)
dvpg = result.result
print(dvpg)


[openstack-dev] Re: [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-15 Thread Huangtianhua
+1, congratulations :)

-----Original Message-----
From: Michael Still [mailto:mi...@stillhq.com]
Sent: 14 June 2014 6:41
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

Greetings,

I would like to nominate Ken'ichi Ohmichi for the nova-core team.

Ken'ichi has been involved with nova for a long time now.  His reviews on API 
changes are excellent, and he's been part of the team that has driven the new 
API work we've seen in recent cycles forward. Ken'ichi has also been reviewing 
other parts of the code base, and I think his reviews are detailed and helpful.

Please respond with +1s or any concerns.

References:

  
https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z

  https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z

  http://www.stackalytics.com/?module=nova-groupuser_id=oomichi

As a reminder, we use the voting process outlined at 
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our core team.

Thanks,
Michael

--
Rackspace Australia



Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-15 Thread INOUE TOMOKO

+1 !!! :-)

(2014/06/14 7:40), Michael Still wrote:

Greetings,

I would like to nominate Ken'ichi Ohmichi for the nova-core team.

Ken'ichi has been involved with nova for a long time now.  His reviews
on API changes are excellent, and he's been part of the team that has
driven the new API work we've seen in recent cycles forward. Ken'ichi
has also been reviewing other parts of the code base, and I think his
reviews are detailed and helpful.

Please respond with +1s or any concerns.

References:

   
https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z

   https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z

   http://www.stackalytics.com/?module=nova-groupuser_id=oomichi

As a reminder, we use the voting process outlined at
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
core team.

Thanks,
Michael




--
Tomoko Inoue
NTT Software Innovation Center / NTT Secure Platform Laboratories
e-mail: inoue.tom...@lab.ntt.co.jp
Telephone: +81 422 59 3496
3-9-11, Midori-Cho, Musashino-shi, Tokyo 180-8585, Japan




[openstack-dev] Nova-compute stucking at spawning

2014-06-15 Thread abhishek jain
Hi

I have installed OpenStack using devstack. I'm able to boot a VM from the
OpenStack cloud on the controller node. When I try to boot a VM on the
compute node, it gets stuck in the spawning state. The logs of nova-compute
on the controller node are almost the same as those of nova-compute on the
compute node, except for one difference:


', '/usr/bin/tee: /sys/class/net/tapd836ffc38-f2/brport/hairpin_mode: No
such file or directory\n')
2014-06-13 09:45:17.314 DEBUG nova.virt.libvirt.driver
[req-5abd5544-03a6-4046-9267-65912a4ce477 admin admin] [instance:
46c68bd3-455b-4997-a9a0-8bd04de3da51] Instance is running spawn
/opt/stack/nova/nova/virt/libvirt/driver.py:2092


It is not able to find any tap directory under /sys/class/net.

Please help regarding this.


Thanks
Abhishek Jain


Re: [openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward

2014-06-15 Thread YAMAMOTO Takashi
hi,

 My initial analysis of Neutron 3rd Party CI is here [1]. This was
 somewhat correlated with information from DriverLog [2], which was
 helpful to put this together.

I updated the etherpad for ofagent.
Currently a single CI system is running tests for both ofagent and ryu;
is that ok?

YAMAMOTO Takashi



Re: [openstack-dev] [nova] Distributed locking

2014-06-15 Thread Angus Lees
On Fri, 13 Jun 2014 09:40:30 AM Matthew Booth wrote:
 On 12/06/14 21:38, Joshua Harlow wrote:
  So just a few thoughts before going to far down this path,
  
  Can we make sure we really really understand the use-case where we think
  this is needed. I think it's fine that this use-case exists, but I just
  want to make it very clear to others why it's needed and why distributed
  locking is the only *correct* way.
 
 An example use of this would be side-loading an image from another
 node's image cache rather than fetching it from glance, which would have
 very significant performance benefits in the VMware driver, and possibly
 other places. The copier must take a read lock on the image to prevent
 the owner from ageing it during the copy. Holding a read lock would also
 assure the copier that the image it is copying is complete.

For this particular example, taking a lock every time seems expensive.  An 
alternative would be to just try to read from another node, and if the result 
wasn't complete and valid for whatever reason, then fall back to reading from
glance.

  * What happens when a node goes down that owns the lock, how does the
  software react to this?
 
 This can be well defined according to the behaviour of the backend. For
 example, it is well defined in zookeeper when a node's session expires.
 If the lock holder is no longer a valid node, it would be fenced before
 deleting its lock, allowing other nodes to continue.
 
 Without fencing it would not be possible to safely continue in this case.

So I'm sorry for explaining myself poorly in my earlier post.  I think you've 
just described waiting for the lock to expire before another node can take it, 
which is just a regular lock behaviour.  What additional steps do you want 
Fence() to perform at this point?

(I can see if the resource provider had some form of fencing, then it could do 
all sorts of additional things - but I gather your original use case is 
exactly where that *isn't* an option)


If the lock was allowed to go stale and not released cleanly, then forcibly
rebooting the stale instance before allowing the lock to be held again
shouldn't be too hard to add.

- Is just rebooting the instance sufficient for similar situations, or would
we need configurable actions?
- Which bot do we trust to issue the reboot command?

From the locking service pov, I can think of several ways to implement this, 
so we probably want to export a high-level operation and allow the details to 
vary to suit the underlying locking implementation.
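One common way to make the "fence before continuing" step concrete is a
fencing token: each lock acquisition hands out a monotonically increasing
number, and the protected resource rejects operations carrying an older
token. A rough, backend-agnostic sketch (all names here are invented; a real
backend such as ZooKeeper would supply the tokens, e.g. via znode versions):

```python
import itertools

class FencedLock:
    """Toy lock that issues a fresh fencing token on every acquisition."""
    def __init__(self):
        self._counter = itertools.count(1)

    def acquire(self):
        # In a real system this blocks until the lock (or an expired
        # lock) can be taken; here we just hand out the next token.
        return next(self._counter)

class Resource:
    """Protected resource that refuses writes with a stale token."""
    def __init__(self):
        self._highest_seen = 0

    def write(self, token, data):
        if token < self._highest_seen:
            raise RuntimeError("stale lock token: %d" % token)
        self._highest_seen = token
        return data

lock = FencedLock()
res = Resource()
old = lock.acquire()      # node A takes the lock, then stalls
new = lock.acquire()      # A's lock expires; node B takes it
res.write(new, "b-data")  # B writes with the newer token
# a delayed write by A using `old` would now raise RuntimeError
```

The point is that the "what happens when the holder dies" question is answered
by the resource itself, rather than by trusting any one node to issue reboots.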

-- 
 - Gus
