Re: [openstack-dev] [heat] Stack update and raw_template backup

2014-07-30 Thread Anant Patil
On 28-Jul-14 22:37, Clint Byrum wrote:
 Excerpts from Zane Bitter's message of 2014-07-28 07:25:24 -0700:
 On 26/07/14 00:04, Anant Patil wrote:
 When the stack is updated, a diff of the updated template and the current
 template can be stored to optimize the database.  And perhaps Heat should
 have an API to retrieve this history of templates for inspection etc.
 when the stack admin needs it.

 If there's a demand for that feature we could implement it, but it 
 doesn't easily fall out of the current implementation any more.
 
 We are never going to do it even 1/10th as well as git. In fact we won't
 even do it 1/10th as well as CVS.
 

Zane,
I am working on the defect you filed, which would clean up the backup stack
along with the resources, templates and other data.

However, I simply don't want to delete the templates, for the same reason
we don't hard-delete the stack. Anyone who deploys a stack and updates it
over time would want to view the updates to the templates for debugging or
auditing reasons. It is not fair to assume that every user has a VCS at hand
to store the templates. It is inconvenient for me not to have the ability to
view my template updates.

We need not go as far as git or any VCS. Any library which can do a diff
and patch of text files can be used, like google-diff-match-patch.
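For illustration, something along these lines would be enough to store and
later replay a template diff (a rough sketch using the diff-match-patch
package from PyPI; the variable names are just examples):

from diff_match_patch import diff_match_patch

old_template = "resources:\n  server:\n    type: OS::Nova::Server\n"
new_template = old_template + "  volume:\n    type: OS::Cinder::Volume\n"

dmp = diff_match_patch()

# On stack update, store only the patch text instead of a full copy
# of the previous template.
stored_diff = dmp.patch_toText(dmp.patch_make(old_template, new_template))

# Later, rebuild the newer revision from the older one plus the patch.
restored, results = dmp.patch_apply(dmp.patch_fromText(stored_diff),
                                    old_template)
assert restored == new_template and all(results)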

- Anant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] oslo.utils 0.1.1 released

2014-07-30 Thread Flavio Percoco
On 07/29/2014 07:24 PM, Davanum Srinivas wrote:
 The Oslo team is pleased to announce the first release of oslo.utils,
 the library that replaces several utils modules from oslo-incubator:
 https://github.com/openstack/oslo.utils/tree/master/oslo/utils
 
 The new library has been uploaded to PyPI, and there is a changeset in
 the queue to update the global requirements list and our package mirror:
 https://review.openstack.org/#/c/110380/
 
 Documentation for the library is available on our developer docs site:
 http://docs.openstack.org/developer/oslo.utils/
 
 The spec for the graduation blueprint includes some advice for
 migrating to the new library:
 http://git.openstack.org/cgit/openstack/oslo-specs/tree/specs/juno/graduate-oslo-utils.rst
 
 Please report bugs using the Oslo bug tracker in launchpad:
 http://bugs.launchpad.net/oslo
 
 Thanks to everyone who helped with reviews and patches to make this
 release possible!
 
 Thanks,
 dims

wt, another step forward!

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Blueprint review request: Suggest Ways to Add Contextual Help to Horizon

2014-07-30 Thread Gundimeda, Raghuram
Greetings to All!

Your review / feedback on the proposals in this blueprint is much awaited.

Just wanted to let you know that we are planning to start implementing design 
#3 later next week in case there isn't any further advice from you.

Appreciate your time here...

Many thanks!
Raghu

From: Gundimeda, Raghuram
Sent: Wednesday, July 09, 2014 2:40 PM
To: 'openstack-dev@lists.openstack.org'
Subject: [openstack-dev] [Horizon] [UX] Blueprint review request: Suggest Ways 
to Add Contextual Help to Horizon

Hi All,

Greetings!

I tried to compile and add 3 more design thoughts to Liz Blanchard's existing 
proposal addressing the blueprint Suggest Ways to Add Contextual Help to 
Horizon: https://blueprints.launchpad.net/openstack-ux/+spec/contextual-help-horizon.
 One goal of this effort is to arrive at a common interaction solution to 
provide contextual help for all types of input elements (input box, drop list, 
checkbox, etc.).

Please review the proposals and let us know your valuable feedback, which
will help us decide on one of them. Precisely (in Liz's words), we would
like to seek your opinion on whether or not it is very important for the
help text to always be showing. This will help us determine which design
approach to take further: designs 1/2, or design 3, where the user doesn't
see the help text until they get near the input field.

You may access the design docs (prototype, write-up) on the blueprint 
launchpad, or download them here:
Prototype: 
https://www.dropbox.com/s/ei56d8ffggs1h51/ContextualHelpInHorizon_Proposal_v1.zip
 (Please unzip and open start.html / index.html in your browser to run the 
interactive prototype. Do let me know if you face any issues here)
Doc: 
https://www.dropbox.com/s/fzitkei90i31kti/Ways%20to%20Add%20Contextual%20Help%20to%20Horizon.pdf

Thanks very much for your time!

Best Regards,
Raghu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ova support in glance

2014-07-30 Thread Flavio Percoco
On 07/30/2014 12:57 AM, Bhandaru, Malini K wrote:
 Hello Everyone!
 
 We were discussing the following blueprint in Glance:
 Enhanced-Platform-Awareness-OVF-Meta-Data-Import 
 :https://review.openstack.org/#/c/104904/
 
 The OVA format is very rich, and the proposal here in its first incarnation is
 to essentially untar the OVA package, import the first disk image therein,
 parse the OVF file, and attach metadata to the disk image.
 There is a nova effort in a similar vein that supports OVA, limiting its
 availability to the VMWare hypervisor. Our efforts will combine.
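 (Just to illustrate the first incarnation concretely, the untar-and-parse
 step could be as simple as the sketch below; 'appliance.ova' and the
 metadata handling are made-up examples, not the proposed Glance code.)

 import tarfile
 import xml.etree.ElementTree as ET

 with tarfile.open('appliance.ova') as ova:
     names = ova.getnames()
     ovf_name = next(n for n in names if n.endswith('.ovf'))
     disk_name = next(n for n in names
                      if n.endswith(('.vmdk', '.qcow2', '.img')))

     # Parse the OVF descriptor and grab the first disk image to import.
     ovf = ET.fromstring(ova.extractfile(ovf_name).read())
     disk = ova.extractfile(disk_name)

     # Walk the OVF (e.g. the VirtualHardwareSection) and map the
     # interesting elements onto image properties before the import.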
 
 The issue that is raised is how many OpenStack users and OpenStack cloud
 providers tackle OVA data with multiple disk images, using them as an
 application.
 Do your users use OVAs with content other than 1 disk image + OVF?
 That is, do they have other files that are used? Do any of you use OVAs with
 snapshot chains?
 Would this solution path break your system, or result in unhappy users?
 
 
 If the solution will at least address 50% of the use cases, a low bar, and 
 ease deploying NFV applications, this would be worthy.
 If so, how would we message around this so as not to imply that OpenStack 
 supports OVA in its full glory?
 
 Down the road the Artifacts blueprint will provide a placeholder for OVA.
 Perhaps even the OVA format may be transformed into a Heat template to work
 in OpenStack.
 
 Please do provide us your feedback.


Hey,

Thanks for your efforts and interest in this area.

We've discussed this in the past - sorry, I don't have links to the
previous discussions - and the result of those discussions was to just
wait until we have better template management in Glance. Artifacts
were also indirectly supported by this idea of having a better way to
store these multi-image templates in Glance.

After taking a look at the proposed spec, I'm even more hesitant to
let it land. The reason is that the changes proposed there would fit
into the work going on in the Artifacts blueprint. Moreover, the proposed
spec suggests extending the ImageProperty model, which I'd really prefer
not to do.

In the future - probably in L - we'd like to convert images to artifacts,
which means the work related to this blueprint would be gone and we'd
have to translate it to artifacts anyway.

With all that said, there's the other issue you mentioned in your email.
People will expect it to be fully implemented and environments depending
on multi-disk OVAs will find it useless.

Don't get me wrong, I do want to see OVF support in Glance; I just don't
think custom support for it is the right way to do it.

Thoughts?
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Flavio Percoco
On 07/29/2014 09:01 PM, Russell Bryant wrote:
 On 07/29/2014 12:12 PM, Daniel P. Berrange wrote:
 Sure there was some debate about what criteria were desired acceptance
 when stable trees were started. Once the criteria are defined I don't
 think it is credible to say that people are incapable of following the
 rules. In the unlikely event that people were to willfully ignore the
 agreed upon rules for stable tree, then I'd not trust them to be part
 of a core team working on any branch at all. With responsibility comes
 trust and an acceptance to follow the agreed upon processes.
 
 I agree with this.  If we can't trust someone on *-core to follow the
 stable criteria, then they shouldn't be on *-core in the first place.
 Further, if we can't trust the combination of *two* people from *-core
 to approve a stable backport, then we're really in trouble.
 

+1

As a stable-maint member, I'm always hesitant to review patches I have no
understanding of; hence I end up just checking how big the patch is,
whether it adds/removes configuration options, etc., but the real
review has to be done by someone with a good understanding of the change.

Something I've done in the past is to add the folks who approved the
patch on master to the stable branch review. They should know that
code already, which means it shouldn't take them long to review it. All
the sanity checks should already have been done.

With all that said, I'd be happy to give *-core approval permissions on
stable branches, but I still think we need a dedicated team that has a
final (or at least relevant) word on the patches.

Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Generate Event or Notification in Ceilometer

2014-07-30 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Hi Jay,



Thanks for your comment. Your suggestion is good, but I am wondering why
we cannot use or leverage Ceilometer to monitor infrastructure-related
things, given that it can be used to monitor tenant-related things.

Regards,
Gary

On 07/29/2014 02:05 AM, Duan, Li-Gong (Gary@HPServers-Core-OE-PSC) wrote:

 Hi Folks,

 Are there any guides or examples to show how to produce a new event or
 notification and add a handler for this event in Ceilometer?

 I am asked to implement OpenStack service monitoring which will send an
 event and trigger the handler once a service, say nova-compute, crashes,
 in a short time. :(

 The link (http://docs.openstack.org/developer/ceilometer/events.html)
 does a good job of explaining the concept, so I know that I need to emit
 notifications to the message queue and ceilometer-collector will process
 them and generate events, but it is far from a real implementation.

I would not use Ceilometer for this, as it is more tenant-facing than
infrastructure-service facing. Instead, I would use a tried-and-true
solution like Nagios and NRPE checks. Here's an example of such a check
for a keystone endpoint:

https://github.com/ghantoos/debian-nagios-plugins-openstack/blob/master/plugins/check_keystone

Best,

-jay


From: Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Sent: Tuesday, July 29, 2014 5:05 PM
To: openstack-dev@lists.openstack.org
Subject: [Ceilometer] Generate Event or Notification in Ceilometer

Hi Folks,

Are there any guides or examples to show how to produce a new event or
notification and add a handler for this event in Ceilometer?

I am asked to implement OpenStack service monitoring which will send an event
and trigger the handler once a service, say nova-compute, crashes, in a short
time. :(
The link (http://docs.openstack.org/developer/ceilometer/events.html) does a
good job of explaining the concept, so I know that I need to emit notifications
to the message queue and ceilometer-collector will process them and generate
events, but it is far from a real implementation.
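(For illustration, emitting such a notification with oslo.messaging looks
roughly like the sketch below; the publisher_id, event type and payload are
made-up examples, and ceilometer-collector still needs a matching event
definition to turn the notification into an event.)

from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)
notifier = messaging.Notifier(transport,
                              publisher_id='servicemonitor.host-1',
                              driver='messaging')

# Example payload describing the crashed service.
payload = {'service': 'nova-compute', 'host': 'compute-1', 'state': 'down'}
notifier.info({}, 'service.crashed', payload)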

Regards,
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] HTTPS client breaks nova

2014-07-30 Thread Flavio Percoco
On 07/23/2014 06:05 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 It looks like the switch to requests in python-glanceclient
 (https://review.openstack.org/#/c/78269/) has broken nova when SSL is
 enabled.

 I think it is related to the custom object that the glanceclient uses.
 If another connection gets pushed into the pool then things fail because
 the object isn't a glanceclient VerifiedHTTPSConnection object.

 The error seen is:

 2014-07-22 16:20:57.571 ERROR nova.api.openstack
 req-e9a94169-9af4-45e8-ab95-1ccd3f8caf04 admin admin Caught error:
 VerifiedHTTPSConnection instance has no attribute 'insecure'

 What I see is that nova works until glance is invoked.

 These all work:

 $ nova flavor-list
 $ glance image-list
 $ nova net-list

 Now make it go boom:

 $ nova image-list
 ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
 req-ee964e9a-c2a9-4be9-bd52-3f42c805cf2c)

 Now that a bad object is in the pool, nothing in nova works:

 $ nova list
 ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
 req-f670db83-c830-4e75-b29f-44f61ae161a1)

 A restart of nova gets things back to normal.

 I'm working on enabling SSL everywhere
 (https://bugs.launchpad.net/devstack/+bug/1328226) either directly or
 using TLS proxies (stud).
 I'd like to eventually get SSL testing done as a gate job which will
 help catch issues like this in advance.

 rob
 
 FYI, my temporary workaround is to change the queue name (scheme) so the
 glance clients are handled separately:
 
 diff --git a/glanceclient/common/https.py b/glanceclient/common/https.py
 index 6416c19..72ed929 100644
 --- a/glanceclient/common/https.py
 +++ b/glanceclient/common/https.py
 @@ -72,7 +72,7 @@ class HTTPSAdapter(adapters.HTTPAdapter):
      def __init__(self, *args, **kwargs):
          # NOTE(flaper87): This line forces poolmanager to use
          # glanceclient HTTPSConnection
 -        poolmanager.pool_classes_by_scheme["https"] = HTTPSConnectionPool
 +        poolmanager.pool_classes_by_scheme["glance_https"] = HTTPSConnectionPool
          super(HTTPSAdapter, self).__init__(*args, **kwargs)
 
      def cert_verify(self, conn, url, verify, cert):
 @@ -92,7 +92,7 @@ class HTTPSConnectionPool(connectionpool.HTTPSConnectionPool):
      be used just when the user sets --no-ssl-compression.
      """
 
 -    scheme = 'https'
 +    scheme = 'glance_https'
 
      def _new_conn(self):
          self.num_connections += 1
 
 This at least lets me continue working.
 
 rob

Hey Rob,

Sorry for the late reply, I'll take a look into this.

Cheers,
Flavio


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Boot from ISO feature status

2014-07-30 Thread Jean-Daniel Bonnetot
Hi Daniel,

For now (Icehouse), it's not possible to boot from an ISO and attach a volume
after boot. Libvirt presents the ISO disk via an IDE controller (/dev/hda)
and a new volume tries to be attached as IDE too (/dev/hdb).

Do you mean that with the new block device mapping code, we will be able
to boot from an ISO (/dev/hda) and add a new SATA volume (/dev/sda)?

-- 
Jean-Daniel

 On Mon, Jul 28, 2014 at 09:30:24AM -0700, Vishvananda Ishaya wrote:
  I think we should discuss adding/changing this functionality. I have had
  many new users assume that booting from an iso image would give them a
  root drive which they could snapshot. I was hoping that the new block
  device mapping code would allow something like this, but unfortunately
  there isn’t a way to do it there either. You can boot a flavor with an
  ephemeral drive, but there is no command to snapshot secondary drives.


 The new block device mapping code is intended to ultimately allow any
 disk configuration you can imagine, so if a desirable setup with CDROM
 vs disks does not work, do file a bug about this because we should
 definitely address it.

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-    http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org              -o-             http://virt-manager.org :|
 |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Not support dnsmasq 2.63?

2014-07-30 Thread Ihar Hrachyshka

On 30/07/14 04:00, Kyle Mestery wrote:
 I'm personally ok with this hard limit, but I'd really like to
 hear from distribution people here to understand their thoughts,
 including what versions of dnsmasq ship with their products and how
 this would affect them.

On the Red Hat side, we are planning to ship a Juno-based build for RHEL 7
only. Currently we already ship dnsmasq-2.66-10.el7 with the Icehouse-based
build (RHEL7-OSP5), so we should be OK with dropping support for older
versions.

Cheers,
/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug discussion at mid cycle meet up

2014-07-30 Thread Daniel P. Berrange
On Tue, Jul 29, 2014 at 02:48:12PM -0400, Russell Bryant wrote:
 On 07/29/2014 11:43 AM, Tracy Jones wrote:
  3.  We have bugs that are really not bugs but features, or performance
  issues.  They really should be a BP not a bug, but we don’t want these
  things to fall off the radar so they are bugs… But we don’t really know
  what to do with them.  Should they be closed?  Should they have a
  different category – like feature request??  Perhaps they should just be
  wish list??
 
 I don't think blueprints are appropriate for tracking requests.  They
 should only be created when someone is proposing to actually do the work.

Agreed, we really do not want to see blueprints created for people
who are just doing 'drive by' feature requests.

 I think Wishlist is fine for keeping a list of requests.  That's what
 I've been using it for.

Yep, wishlist works fine IMHO - it is a nice low overhead approach
for users wanting to report these kind of items, with a tool they
are already familiar with.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Ihar Hrachyshka

On 29/07/14 18:12, Daniel P. Berrange wrote:
 On Tue, Jul 29, 2014 at 08:30:09AM -0700, Jay Pipes wrote:
 On 07/29/2014 06:13 AM, Daniel P. Berrange wrote:
 On Tue, Jul 29, 2014 at 02:04:42PM +0200, Thierry Carrez
 wrote:
 Ihar Hrachyshka wrote: At the dawn of time there were no
 OpenStack stable branches, each distribution was maintaining
 its own stable branches, duplicating the backporting work. At
 some point it was suggested (mostly by RedHat and Canonical
 folks) that there should be collaboration around that task, 
 and the OpenStack project decided to set up official stable
 branches where all distributions could share the backporting
 work. The stable team group was seeded with package
 maintainers from all over the distro world.
 
 So these branches originally only exist as a convenient place
 to collaborate on backporting work. This is completely
 separate from development work, even if those days backports
 are often proposed by developers themselves. The stable
 branch team is separate from the rest of OpenStack teams. We
 have always been very clear that if the stable branches are no
 longer maintained (i.e. if the distributions don't see the
 value of those anymore), then we'll consider removing them.
 We, as a project, only signed up to support those as long as
 the distros wanted them.
 
 We have been adding new members to the stable branch teams
 recently, but those tend to come from development teams
 rather than downstream distributions, and that starts to bend
 the original landscape. Basically, the stable branch needs to
 be very conservative to be a source of safe updates --
 downstream distributions understand the need to weigh the
 benefit of the patch vs. the disruption it may cause. 
 Developers have another type of incentive, which is to get
 the fix they worked on into stable releases, without
 necessarily being very conservative. Adding more -core people
 to the stable team to compensate the absence of distro
 maintainers will ultimately kill those branches.
 
 The situation I'm seeing is that the broader community believe
 that the Nova core team is responsible for the nova stable
 branches. When stuff sits in review for ages it is the core
 team that is getting pinged about it and on the receiving end
 of the complaints the inaction of review.
 
 Adding more people to the stable team won't kill those
 branches. I'm not suggesting we change the criteria for
 accepting patches, or that we dramatically increase the number
 of patches we accept. There is clearly a lot of stuff proposed
 to stable that the existing stable team thinks is a good idea -
 as illustrated by the number of patches with at least one +2
 present. On the contrary, having a bigger stable team comprising
 all of core + interested distro maintainers will ensure that
 the stable branches are actually getting the patches people
 in the field need to provide a stable cloud.
 
 -1
 
 In my experience, the distro maintainers who pioneered the stable
 branch teams had opposite viewpoints to core teams in regards to
 what was appropriate to put into a stable release. I think it's
 dangerous to populate the stable team with the core team members
 just because of long review and merge times.
 
 Sure there was some debate about what criteria were desired
 acceptance when stable trees were started. Once the criteria are
 defined I don't think it is credible to say that people are
 incapable of following the rules. In the unlikely event that people
 were to willfully ignore the agreed upon rules for stable tree,
 then I'd not trust them to be part of a core team working on any
 branch at all. With responsibility comes trust and an acceptance to
 follow the agreed upon processes.

Still, it's quite common to see patches with no cherry-pick info, or the
Conflicts section removed, or an incorrect Change-Id being approved and
pushed. It's also common for people to review backports as if they were
sent against master (with all the nitpicking, unit test changes
requested, and no consideration for stable branch applicability...)

 
 Regards, Daniel
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Daniel P. Berrange
On Wed, Jul 30, 2014 at 11:16:00AM +0200, Ihar Hrachyshka wrote:
 
 On 29/07/14 18:12, Daniel P. Berrange wrote:
  On Tue, Jul 29, 2014 at 08:30:09AM -0700, Jay Pipes wrote:
  On 07/29/2014 06:13 AM, Daniel P. Berrange wrote:
  On Tue, Jul 29, 2014 at 02:04:42PM +0200, Thierry Carrez
  wrote:
  Ihar Hrachyshka wrote: At the dawn of time there were no
  OpenStack stable branches, each distribution was maintaining
  its own stable branches, duplicating the backporting work. At
  some point it was suggested (mostly by RedHat and Canonical
  folks) that there should be collaboration around that task, 
  and the OpenStack project decided to set up official stable
  branches where all distributions could share the backporting
  work. The stable team group was seeded with package
  maintainers from all over the distro world.
  
  So these branches originally only exist as a convenient place
  to collaborate on backporting work. This is completely
  separate from development work, even if those days backports
  are often proposed by developers themselves. The stable
  branch team is separate from the rest of OpenStack teams. We
  have always been very clear that if the stable branches are no
  longer maintained (i.e. if the distributions don't see the
  value of those anymore), then we'll consider removing them.
  We, as a project, only signed up to support those as long as
  the distros wanted them.
  
  We have been adding new members to the stable branch teams
  recently, but those tend to come from development teams
  rather than downstream distributions, and that starts to bend
  the original landscape. Basically, the stable branch needs to
  be very conservative to be a source of safe updates --
  downstream distributions understand the need to weigh the
  benefit of the patch vs. the disruption it may cause. 
  Developers have another type of incentive, which is to get
  the fix they worked on into stable releases, without
  necessarily being very conservative. Adding more -core people
  to the stable team to compensate the absence of distro
  maintainers will ultimately kill those branches.
  
  The situation I'm seeing is that the broader community believe
  that the Nova core team is responsible for the nova stable
  branches. When stuff sits in review for ages it is the core
  team that is getting pinged about it and on the receiving end
  of the complaints the inaction of review.
  
  Adding more people to the stable team won't kill those
  branches. I'm not suggesting we change the criteria for
  accepting patches, or that we dramatically increase the number
  of patches we accept. There is clearly a lot of stuff proposed
  to stable that the existing stable team thinks is a good idea -
  as illustrated by the number of patches with at least one +2
  present. On the contrary, having a bigger stable team comprising
  all of core + interested distro maintainers will ensure that
  the stable branches are actually getting the patches people
  in the field need to provide a stable cloud.
  
  -1
  
  In my experience, the distro maintainers who pioneered the stable
  branch teams had opposite viewpoints to core teams in regards to
  what was appropriate to put into a stable release. I think it's
  dangerous to populate the stable team with the core team members
  just because of long review and merge times.
  
  Sure there was some debate about what criteria were desired
  acceptance when stable trees were started. Once the criteria are
  defined I don't think it is credible to say that people are
  incapable of following the rules. In the unlikely event that people
  were to willfully ignore the agreed upon rules for stable tree,
  then I'd not trust them to be part of a core team working on any
  branch at all. With responsibility comes trust and an acceptance to
  follow the agreed upon processes.
 
 Still, it's quite common to see patches with no cherry-pick info, or the
 Conflicts section removed, or an incorrect Change-Id being approved and
 pushed. It's also common for people to review backports as if they were
 sent against master (with all the nitpicking, unit test changes
 requested, and no consideration for stable branch applicability...)

As said before, if people reviewing the code consistently fail to
follow the agreed review guidelines, then point out what they're
doing wrong and if they fail to improve, remove their +2 privileges.
There's nothing unique about stable branches in this regard. If
people reviewing master consistently did the wrong thing they'd
have their +2 removed too.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|


Re: [openstack-dev] [ALL] Removing the tox==1.6.1 pin

2014-07-30 Thread Michele Paolino

On 30/07/2014 07:53, Matt Riedemann wrote:



On 7/25/2014 2:38 PM, Clark Boylan wrote:

Hello,

The recent release of tox 1.7.2 has fixed the {posargs} interpolation
issues we had with newer tox which forced us to be pinned to tox==1.6.1.
Before we can remove the pin and start telling people to use latest tox
we need to address a new default behavior in tox.

New tox sets a random PYTHONHASHSEED value by default. Arguably this is
a good thing as it forces you to write code that handles unknown hash
seeds, but unfortunately many projects' unittests don't currently deal
with this very well. A work around is to hard set a PYTHONHASHSEED of 0
in tox.ini files. I have begun to propose these changes to the projects
that I have tested and found to not handle random seeds. It would be
great if we could get these reviewed and merged so that infra can update
the version of tox used on our side.

I probably won't be able to test every single project and propose fixes
with backports to stable branches for everything. It would be a massive
help if individual projects tested and proposed fixes as necessary too
(these changes will need to be backported to stable branches). You can
test by running `tox -epy27` in your project with tox version 1.7.2. If
that fails add PYTHONHASHSEED=0 as in
https://review.openstack.org/#/c/109700/ and rerun `tox -epy27` to
confirm that succeeds.
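(For reference, the workaround boils down to a tox.ini change along these
lines; the exact testenv sections differ from project to project:)

[testenv]
# Keep a fixed hash seed until the tests cope with randomized hashing.
setenv =
    PYTHONHASHSEED=0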

This will get us over the immediate hump of the tox upgrade, but we
should also start work to make our tests run with random hashes. This
shouldn't be too hard to do as it will be a self gating change once
infra is able to update the version of tox used in the gate. Most of the
issues appear related to dict entry ordering. I have gone ahead and
created https://bugs.launchpad.net/cinder/+bug/1348818 to track this
work.

Thank you,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is this in any way related to the fact that tox is unable to 
find/install the oslo alpha packages for me in nova right now (config, 
messaging, rootwrap) after I rebased on master?  I had to go into 
requirements.txt and remove the min versions on the alpha versions to 
get tox to install dependencies for nova unit tests. I'm running with 
tox 1.6.1 but not sure if that would be related anyhow.


Problem confirmed from my side. The error is:
Downloading/unpacking oslo.config>=1.4.0.0a3 (from -r
/media/repos/nova/requirements.txt (line 34))
  Could not find a version that satisfies the requirement
oslo.config>=1.4.0.0a3 (from -r /media/repos/nova/requirements.txt (line
34)) (from versions: 1.1.0, 1.1.1, 1.2.0, 1.2.1, 1.3.0)


--
Michele Paolino


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing the tox==1.6.1 pin

2014-07-30 Thread Daniel P. Berrange
On Fri, Jul 25, 2014 at 02:38:49PM -0700, Clark Boylan wrote:
 Hello,
 
 The recent release of tox 1.7.2 has fixed the {posargs} interpolation
 issues we had with newer tox which forced us to be pinned to tox==1.6.1.
 Before we can remove the pin and start telling people to use latest tox
 we need to address a new default behavior in tox.
 
 New tox sets a random PYTHONHASHSEED value by default. Arguably this is
 a good thing as it forces you to write code that handles unknown hash
 seeds, but unfortunately many projects' unittests don't currently deal
 with this very well. A work around is to hard set a PYTHONHASHSEED of 0
 in tox.ini files. I have begun to propose these changes to the projects
 that I have tested and found to not handle random seeds. It would be
 great if we could get these reviewed and merged so that infra can update
 the version of tox used on our side.
 
 I probably won't be able to test every single project and propose fixes
 with backports to stable branches for everything. It would be a massive
 help if individual projects tested and proposed fixes as necessary too
 (these changes will need to be backported to stable branches). You can
 test by running `tox -epy27` in your project with tox version 1.7.2. If
 that fails add PYTHONHASHSEED=0 as in
 https://review.openstack.org/#/c/109700/ and rerun `tox -epy27` to
 confirm that succeeds.

NB you don't even need to have tox 1.7.2 to validate this. Just running

  PYTHONHASHSEED=666 ./run_tests.sh

was sufficient to show the failures in Nova.
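To see the underlying dict-ordering variance in isolation, a couple of lines
of standalone Python are enough (nothing project-specific; different seeds
will usually print the keys in a different order):

import os
import subprocess

snippet = "print(list({'glance': 1, 'nova': 2, 'cinder': 3, 'heat': 4}))"
for seed in ('0', '1', '42'):
    # Run the same snippet under different hash seeds.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.check_output(['python', '-c', snippet], env=env)
    print('%s -> %s' % (seed, out.decode().strip()))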

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Uniform style of README file for contrib resources

2014-07-30 Thread Sergey Kraynev
Hello guys.

Recently I came across yet another change related to the content of the
README for the Docker resource:

https://review.openstack.org/#/c/110541/
https://review.openstack.org/#/c/101144/

Both changes try to improve the current description.
I looked at the other README files in the contrib directories and noticed
that all the instructions try to explain information which is already
available here [1].
This and some other points led me to the following ideas:

- Should we provide some commands, like in Docker's README, which will add
the required path to plugin_dirs?

- Should we have several specific interpretations of [1], or would it be
better to add a reference to the existing guide and mention only the really
specific notes?

- Should we leave empty sections? (For example, [2])

- Should we add README for barbican resources?

- How about one uniform template for README files?
I think the right way is to have a list of allowed sections for README files,
with fixed names.
In my opinion it would help other developers and users of all the contrib
resources, because they would know what to find in each section.

I suggest using the following structure (note: if a section is empty you
should just not use it):

# Title with the name of the resource or what this resource will be used for
  (after this title you should provide a description of the resource)
## Resources - constant name. This section is used if the plugin directory
  contains more than one resource (e.g. the Rackspace resources).
# Installation - constant name. What to do to use this plugin. (Possibly a
  link to [1] will be enough instead of the sections below.)
## Changes in configuration - constant name. Which files to change and how.
  (Possibly a link to [1] will be enough.)
## Restarting services - constant name. Names of the services which should be
  restarted.
# Examples - constant name. Section for template examples (not a full
  template, just the definition of the contrib resource).
# Known issues - constant name. All related issues.
# How it works - constant name. If you want to explain some mechanism of this
  resource.
# Notes - constant name. Section for information which cannot be classified
  into the sections above.
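To make it concrete, a README following this structure could look roughly
like the sketch below (the resource and option names are only illustrative):

# Docker plugin for OpenStack Heat
This plugin adds the DockerInc::Docker::Container resource to Heat.

# Installation
## Changes in configuration
Add the plugin directory to plugin_dirs in /etc/heat/heat.conf.
## Restarting services
Restart heat-engine.

# Examples
resources:
  my_container:
    type: DockerInc::Docker::Container
    properties:
      image: cirros

# Known issues
The resource requires docker-py on the heat-engine host.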

I understand that these are just README files, but I still think this is
important enough to discuss. I hope to hear your thoughts.


[1]
https://wiki.openstack.org/wiki/Heat/Plugins#Installation_and_Configuration
[2] https://github.com/openstack/heat/tree/master/contrib/rackspace

Regards,
Sergey.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Configuration Groups CLI improvements

2014-07-30 Thread Denis Makogon
Hello, Stackers.

Since Trove gives the ability to create post-deployment configuration for
instances, it would be nice to have the ability to pass a database
configuration file location to configuration-create.

I'd like to propose a feature that would improve the usability of the CLI for
configuration-create. To avoid text duplication, I filed a BP and wrote a spec
for it:


BP: https://blueprints.launchpad.net/trove/+spec/configuration-improvements

Wiki: https://wiki.openstack.org/wiki/Trove/ConfigurationShellImprovements


Any early feedback is appreciated.


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PKG-Openstack-devel] Bug#755315: [Trove] Should we stop using wsgi-intercept, now that it imports from mechanize? this is really bad!

2014-07-30 Thread Chris Dent

On Tue, 29 Jul 2014, Chris Dent wrote:


Let me know whenever you have a new release, without mechanize as a new
dependency, or with it being optional.


It will be soon (a day or so).


https://pypi.python.org/pypi/wsgi_intercept is now at 0.8.0

All traces of mechanize removed. Have at. Enjoy. If there are issues
please post them in the github issues
https://github.com/cdent/python3-wsgi-intercept/issues first before
the openstack-dev list...

Please note that the long term plan is likely to be that _all_ the
interceptors will be removed and will be packaged as their own
packages with the core package only providing the faked socket and
environ infrastructure for the interceptors to use.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing the tox==1.6.1 pin

2014-07-30 Thread Daniel P. Berrange
On Fri, Jul 25, 2014 at 02:38:49PM -0700, Clark Boylan wrote:
 New tox sets a random PYTHONHASHSEED value by default. Arguably this is
 a good thing as it forces you to write code that handles unknown hash
 seeds, but unfortunately many projects' unittests don't currently deal
 with this very well. A work around is to hard set a PYTHONHASHSEED of 0
 in tox.ini files. I have begun to propose these changes to the projects
 that I have tested and found to not handle random seeds. It would be
 great if we could get these reviewed and merged so that infra can update
 the version of tox used on our side.

NB, the problem is not merely restricted to unit tests. The first test
failure I looked at turned out to be a problem in a database migration
script, so if anyone sets a non-zero PYTHONHASHSEED in production for
security reasons, they'd be exposed to non-deterministic behaviour too

  https://review.openstack.org/#/c/110605/

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Not support dnsmasq 2.63?

2014-07-30 Thread Rossella Sblendido
Hi Kyle,

SUSE Cloud ships dnsmasq 2.71. For us it's fine to bump the minimum
supported version to 2.63.

cheers,

Rossella

On 07/30/2014 04:00 AM, Kyle Mestery wrote:
 On Tue, Jul 29, 2014 at 8:51 PM, Xuhan Peng pengxu...@gmail.com wrote:
 We bumped the minimum version of dnsmasq to 2.63 a while ago by this code
 change:

 https://review.openstack.org/#/c/105378/

 However, currently we still kind of support earlier versions of dnsmasq,
 because we only give a warning and don't exit the program when we find the
 dnsmasq version is less than the minimum. This causes some confusion and
 complicates the code, since we need to handle the different syntax of
 different dnsmasq versions in the DHCP code (note that previous versions
 don't support the tag option).

 I wonder what your opinion is on NOT supporting dnsmasq versions less than
 2.63 in Juno? I think we can print an error message and exit the program
 when we detect an invalid version, but I would like to gather more thoughts
 on this one.
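 (As a sketch, the hard check could look something like this; the names are
 made up, not the actual Neutron code:)

 import re
 import subprocess

 MINIMUM_DNSMASQ_VERSION = (2, 63)

 def check_dnsmasq_version():
     # dnsmasq prints e.g. "Dnsmasq version 2.66  Copyright ..." to stdout.
     out = subprocess.check_output(['dnsmasq', '--version']).decode()
     match = re.search(r'version (\d+)\.(\d+)', out)
     if not match or tuple(int(x) for x in match.groups()) < MINIMUM_DNSMASQ_VERSION:
         raise SystemExit('dnsmasq >= 2.63 is required by the DHCP agent')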

 I'm personally ok with this hard limit, but I'd really like to hear
 from distribution people here to understand their thoughts, including
 what versions of dnsmasq ship with their products and how this would
 affect them.

 Thanks,
 Kyle

 Thanks,
 Xu Han





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Spec process

2014-07-30 Thread Thierry Carrez
James Slagle wrote:
 Finally, the juno-2 milestone has passed. Many (if not all?)
 integrated projects have already -2'd specs that have not been
 approved, indicating they are not going to make Juno. There are many
 valid reasons to do this: focus, stabilization, workload, etc.
 
 Personally, I don't feel like TripleO agreed on or discussed this point as a
 community. I'm actually not sure offhand (without digging through the
 archives) whether the spec freeze is an OpenStack-wide process or one for
 individual projects. And, if it is OpenStack-wide, would it apply just to
 projects that are part of the integrated release?

Spec freezes are currently opt-in, even for integrated projects. Only a
handful of projects elected to strictly follow them this time. The key
benefit is to allow core reviewers to focus on the stuff that might
actually make it in time for release, rather than having to spend time
reviewing specs that have nearly no chance of being Juno material anyway.

So as far as TripleO goes, it really is a local team decision.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Survey on Token Provider Usage

2014-07-30 Thread Thierry Carrez
Morgan Fainberg wrote:
 The Keystone team is looking for feedback from the community on what type of 
 Keystone Token is being used in your OpenStack deployments. This is to help 
 us understand the use of the different providers and get information on the 
 reasoning (if possible) that that token provider is being used.
 
 Please use the survey link and let us know which release of OpenStack and 
 which Keystone Token type (UUID, PKI, PKIZ, something custom) you are using. 
 The results of this survey will have no impact on future support of any of 
 these types of Tokens, we plan to continue to support all of the current 
 token formats and the ability to use a custom token provider.
 
 https://www.surveymonkey.com/s/NZNDH3M 

Great!

I see you posted this on -dev and -operators... You should probably also
post this (or make sure it gets forwarded) on the openstack general ML.
I'd expect you'd get extra data points from users from there.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Uniform style of README file for contrib resources

2014-07-30 Thread Angus Salkeld
On Wed, 2014-07-30 at 13:41 +0400, Sergey Kraynev wrote:
 Hello guys.
 
 
 In the last time I meet again change related with changing content of
 README for Docker resource.
 
 
 
 https://review.openstack.org/#/c/110541/

I can ditch mine; I was just making a change after helping someone through
it on IRC.

 
 https://review.openstack.org/#/c/101144/
 
 
 
 Both changes try to improve current description.
 I looked on other README files in contrib directories and met that all
 instructions try to explain 
 information which is available here [1].
 This and some other points pushed me on the follow ideas:
 
 
 - Should we provide some commands like in docker's README which will
 add required path to plugin_dirs ?

IMO no, it is really simple and this just makes it look harder than it
is.

 
 
 - Should we have several specific interpretations of [1] or will be
 better to add reference on existing guide and
 mention that some really specific notes?
 
 
 - Should we leave empty sections? (For example, [2])
 
 
 - Should we add README for barbican resources?
 
 
 - How about one uniform template for README files?

can't we just have one contrib/README (at least for the installing
part)?

 I think that the right way to have list of allowed sections for README
 with fixed names.
 In my opinion it helps other developers and users with using all
 contrib resources, because they will know what find
 in each section.
 
 
 I suggest to use follow structure (Note: if section is empty you just
 should not use this section):
 
 
 # Title with name of resource or for what this resource will be used
  (After this title you should provide description of resource)
 ## Resources - constant name. This section will  be used if
 plugin directory contains more then one resource (F.e. rackspase
 resources)
 # Installation   - constant name. What we should do for using
 this plugin. (Possible will be enough to add link [1] instead sections
 below)
 ## Changes in configuration  - constant name. Which files and How
 we should change them. (Possible will  be enough to add link [1])
 ## Restarting services - constant name. Names of
 services, which should be restarted.
 # Examples  - constant name. Section for
 providing template examples (not full template, just definition of
 contrib resource)
 # Known issues  - constant name. All related
 issues.
 # How it works - constant name. If you want
 to tell some mechanism about this resource.
 # Notes- constant name. Section for
 information, which can not be classified for sections above.
  
 I understand, that it's just README files, but I still think that it's
 enough important for discussion. Hope on your thoughts.
 

yeah, they are inconsistent.

-Angus

 
 
 
 [1] 
 https://wiki.openstack.org/wiki/Heat/Plugins#Installation_and_Configuration
 [2] https://github.com/openstack/heat/tree/master/contrib/rackspace
 
 
 Regards,
 Sergey.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Incubation request

2014-07-30 Thread Thierry Carrez
Swartzlander, Ben wrote:
 On Tue, 2014-07-29 at 13:38 +0200, Thierry Carrez wrote:
 Swartzlander, Ben wrote:
 Manila has come a long way since we proposed it for incubation last autumn. 
 Below are the formal requests.

 https://wiki.openstack.org/wiki/Manila/Incubation_Application
 https://wiki.openstack.org/wiki/Manila/Program_Application

 Anyone have anything to add before I forward these to the TC?

 When ready, propose a governance change a bit like this one:

 https://github.com/openstack/governance/commit/52d9b4cf2f3ba9d0b757e16dc040a1c174e1d27e
 
 Thierry, does the governance change process replace the process of
 sending an email to the openstack-tc ML?

The procedure for TC matters is the following:

1. create a thread on -dev so that we can all openly discuss the request
2. make sure the TC notices the thread on -dev by posting a pointer to
the -dev thread to openstack-tc (separate emails, avoid cross-posting so
that the discussion stays on -dev)
3. when you think the discussion is sufficiently advanced, propose a
governance change to formally request a TC vote on the matter

Hope this helps.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Thierry Carrez
Russell Bryant wrote:
 On 07/29/2014 12:12 PM, Daniel P. Berrange wrote:
 Sure there was some debate about what criteria were desired acceptance
 when stable trees were started. Once the criteria are defined I don't
 think it is credible to say that people are incapable of following the
 rules. In the unlikely event that people were to willfully ignore the
 agreed upon rules for stable tree, then I'd not trust them to be part
 of a core team working on any branch at all. With responsibility comes
 trust and an acceptance to follow the agreed upon processes.
 
 I agree with this.  If we can't trust someone on *-core to follow the
 stable criteria, then they shouldn't be on *-core in the first place.
 Further, if we can't trust the combination of *two* people from *-core
 to approve a stable backport, then we're really in trouble.

There are a few different facets on this issue.

The first facet is a community aspect. Stable branch maintenance is a
task separate from upstream development, ideally performed by the people
that have a direct interest in having good, maintained stable branches
(downstream distribution packagers). Now, if *all PTLs* are fine with
adding stable branch maintenance to the tasks of their core reviewers
(in addition to specs and master branch review), then I guess we can
abandon that concept of separate tasks. But I wasn't under the
impression the core population was looking forward to add extra duties
to their current workload.

The second facet is the opt-in nature of the job. Yes, core reviewers
are trusted with the acceptance of patches on the master branch and
probably have the capacity to evaluate stable branch patches as well.
But stable branches have different rules, and most current core
reviewers don't know them. There have been numerous cases where an
inappropriate patch was pushed to stable/* by a core developer who was
very insistent on having his bugfix merged and rejected the rules we
pointed out to them. I'm fine with adding core reviewers who have
read and agreed to follow the stable rules, but adding them all by
default sounds more like a convenient way to push your own patches in
than a way to solve the stable manpower issue.

The third facet is procedural: we currently have a single stable-maint
team for all integrated projects. The original idea is that the stable
branch maintainers do not judge the patch itself (which was judged by
-core people when the patch landed in master), they just judge if it's
appropriate for stable backport (how disruptive it is, if it's feature-y
or if it adds a new config option, or if it changes default behavior).
You don't need to be that much of a domain expert to judge that, and if
unsure we would just ask. If we add more project-specific core
reviewers, we should probably switch to per-project stable-maint groups.
I'm not opposed to that, just saying that's an infra config change we
need to push.

I'm generally open to changing that, since I reckon we have a manpower
issue on the stable maint side. I just want to make sure everyone knows
what they are signing up for here. We would do it for all the projects,
we can't special-case Nova.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-07-30 Thread Ken Giusti
Greetings,

Apologies for the cross-post: this should be of interest to both infra
and olso.messaging developers.

The blueprint [0] that adds support for version 1.0 of the AMQP messaging
protocol is blocked due to CI test failures [1]. These failures are due
to a new package dependency this blueprint adds to oslo.messaging.

The AMQP 1.0 functionality is provided by Apache Qpid's Proton
AMQP 1.0 toolkit.  The blueprint uses the Python bindings for this
toolkit, which are available on PyPI.  These bindings, however, include
a C extension that depends on the Proton toolkit development libraries
in order to build and install.  The lack of this toolkit is the cause
of the blueprint's current CI failures.

This toolkit is written in C, and thus requires platform-specific
libraries.

Now here's the problem: packages for Proton are not included by
default in most distro's base repositories (yet).  The Apache Qpid
team has provided packages for EPEL, and has a PPA available for
Ubuntu.  Packages for Debian are also being proposed.

I'm proposing this patch to openstack-infra/config to address the
dependency problem [2].  It adds the proton toolkit packages to the
common slave configuration.  Does this make sense?  Are there any
better alternatives?

[0] 
https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
[1] https://review.openstack.org/#/c/75815/
[2] https://review.openstack.org/#/c/110431/



-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Uniform style of README file for contrib resources

2014-07-30 Thread Sergey Kraynev
On 30 July 2014 16:21, Angus Salkeld angus.salk...@rackspace.com wrote:

 On Wed, 2014-07-30 at 13:41 +0400, Sergey Kraynev wrote:
  Hello guys.
 
 
  In the last time I meet again change related with changing content of
  README for Docker resource.
 
 
 
  https://review.openstack.org/#/c/110541/

 I can ditch mine, just making a change after helping someone
 on IRC through it.


I don't mind your change. It reminded me of the current situation
with the README files.




 
  https://review.openstack.org/#/c/101144/
 
 
 
  Both changes try to improve current description.
  I looked on other README files in contrib directories and met that all
  instructions try to explain
  information which is available here [1].
  This and some other points pushed me on the follow ideas:
 
 
  - Should we provide some commands like in docker's README which will
  add required path to plugin_dirs ?

 IMO no, it is really simple and this just makes it look harder than it
 is.

 
 
  - Should we have several specific interpretations of [1] or will be
  better to add reference on existing guide and
  mention that some really specific notes?
 
 
  - Should we leave empty sections? (For example, [2])
 
 
  - Should we add README for barbican resources?
 
 
  - How about one uniform template for README files?

 can't we just have one contrib/README (at least for the installing
 part)?



I wanted to suggest this approach, but what about some specific keys (the
keystone plugin requires an additional change in heat.conf)? And did I
understand correctly that you propose to leave the other (non-installation)
information in their own README files?



  I think that the right way to have list of allowed sections for README
  with fixed names.
  In my opinion it helps other developers and users with using all
  contrib resources, because they will know what find
  in each section.
 
 
  I suggest to use follow structure (Note: if section is empty you just
  should not use this section):
 
 
  # Title with the name of the resource, or what this resource is used for
   (after this title you should provide a description of the resource)
  ## Resources - constant name. This section is used if the
  plugin directory contains more than one resource (e.g. the rackspace
  resources).
  # Installation - constant name. What should be done to use
  this plugin. (It may be enough to add link [1] instead of the sections
  below.)
  ## Changes in configuration - constant name. Which files should be
  changed, and how. (It may be enough to add link [1].)
  ## Restarting services - constant name. Names of the
  services which should be restarted.
  # Examples - constant name. Section for
  providing template examples (not a full template, just the definition of
  the contrib resource).
  # Known issues - constant name. All related
  issues.
  # How it works - constant name. If you want
  to explain the mechanism behind this resource.
  # Notes - constant name. Section for
  information which cannot be classified into the sections above.
 
  I understand that these are just README files, but I still think the topic
  is important enough to discuss. Hoping for your thoughts.
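 
  For illustration, a skeleton following this structure could look like
  the sketch below (the resource name and section contents are placeholders,
  not agreed wording):
 
  # Foo resource plugin
   (short description of what OS::Heat::Foo does)
  ## Resources
   OS::Heat::Foo - ...
  # Installation
  ## Changes in configuration
   (which files to change and how, or simply link [1])
  ## Restarting services
   heat-engine
  # Examples
   resources:
     foo_resource:
       type: OS::Heat::Foo
  # Known issues
  # How it works
  # Notes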
 

 yeah, they are inconsistent.

 -Angus

 
 
 
  [1]
 https://wiki.openstack.org/wiki/Heat/Plugins#Installation_and_Configuration
  [2] https://github.com/openstack/heat/tree/master/contrib/rackspace
 
 
  Regards,
  Sergey.
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] objects notifications

2014-07-30 Thread Jay Lau
So we would need to create a decorator method for create(), save(), destroy(),
etc., as follows?
NOTIFICATION_FIELDS = ['host', 'metadata', ...]

  @notify_on_save(NOTIFICATION_FIELDS)
  @base.remotable
  def save(context):

  @notify_on_create(NOTIFICATION_FIELDS)
  @base.remotable
  def create(context):

Or can we just make the decorator method as generic as possible, as
follows:

  @notify(NOTIFICATION_FIELDS)
  @base.remotable
  def save(context):

  @notify(NOTIFICATION_FIELDS)
  @base.remotable
  def create(context):

In the above case, the notify() method can handle all cases, including create,
delete, and update.

Comments?
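
For illustration, a rough standalone sketch of such a generic decorator might
look like this (not actual nova code: obj_what_changed() is the existing
NovaObject change-tracking call, and send_notification() is just a placeholder
for whatever notification helper we settle on):

  import functools

  def notify(fields):
      def decorator(fn):
          @functools.wraps(fn)
          def wrapper(self, context, *args, **kwargs):
              # capture the interesting changes before save()/create()
              # resets the changed-field tracking
              changed = set(fields) & self.obj_what_changed()
              result = fn(self, context, *args, **kwargs)
              if changed:
                  send_notification(context, self, fn.__name__, changed)
              return result
          return wrapper
      return decorator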




2014-07-30 12:26 GMT+08:00 Dan Smith d...@danplanet.com:

  When reviewing https://review.openstack.org/#/c/107954/ it occurred to
  me that maybe we should consider having some kind of generic object
  wrapper that could do notifications for objects. Any thoughts on this?

 I think it might be good to do this in a repeatable, but perhaps not
 totally automatic way. I can see that any time instance gets changed in
 certain ways, that we'd want a notification about it. However, there are
 probably some cases that don't fit that. For example,
 instance.system_metadata is mostly private to nova I think, so I'm not
 sure we'd want to emit a notification for that. Plus, we'd probably end
 up with some serious duplication if we just do it implicitly.

 What if we provided a way to declare the fields of an object that we
 want to trigger a notification? Something like:

   NOTIFICATION_FIELDS = ['host', 'metadata', ...]

   @notify_on_save(NOTIFICATION_FIELDS)
   @base.remotable
   def save(context):
   ...

 --Dan


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-07-30 Thread Daniel P. Berrange
On Wed, Jul 30, 2014 at 08:54:01AM -0400, Ken Giusti wrote:
 Greetings,
 
 Apologies for the cross-post: this should be of interest to both infra
 and olso.messaging developers.
 
 The blueprint [0] that adds support for version 1.0 of the AMQP messaging
 protocol is blocked due to CI test failures [1]. These failures are due
 to a new package dependency this blueprint adds to oslo.messaging.
 
 The AMQP 1.0 functionality is provided by the Apache Qpid's Proton
 AMQP 1.0 toolkit.  The blueprint uses the Python bindings for this
 toolkit, which are available on Pypi.  These bindings, however, include
 a C extension that depends on the Proton toolkit development libraries
 in order to build and install.  The lack of this toolkit is the cause
 of the blueprint's current CI failures.
 
 This toolkit is written in C, and thus requires platform-specific
 libraries.
 
 Now here's the problem: packages for Proton are not included by
 default in most distro's base repositories (yet).  The Apache Qpid
 team has provided packages for EPEL, and has a PPA available for
 Ubuntu.  Packages for Debian are also being proposed.
 
 I'm proposing this patch to openstack-infra/config to address the
 dependency problem [2].  It adds the proton toolkit packages to the
 common slave configuration.  Does this make sense?  Are there any
 better alternatives?

For other cases where we need more native packages, we typically
use devstack to ensure they are installed. This is preferable
since it works for ordinary developers as well as the CI system.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Managing change in gerrit which depends on multiple other changes in review

2014-07-30 Thread Evgeny Fedoruk
Hi Brandon, Doug

Thanks for your explanations and feedback
I think I understand now what should be done.

I made a commit today to my TLS change https://review.openstack.org/#/c/109035
And my barbican module change https://review.openstack.org/#/c/109849

It caused:
1. patch commit to New extension for version 2 of LBaaS API change 
with some addition to neutron.conf file probably because I made a fresh 
neutron clone. 
2. patch commit to  Plugin/DB additions for version 2 of LBaaS API change
 with empty line deletion in migration HEAD file
3. patch commit to  Tests for extension, db and plugin for LBaaS V2 change
with same addition to neutron.conf file

My commit caused the changes to be scheduled to gate tests
Gate tests failed because of a new alembic migration that was merged yesterday 
- 31d7f831a591, so the migration timeline is broken now.
Brandon, it should be fixed in order to pass the gate. When your changes are 
fixed, I will rebase mine.

So it looks OK now, except for the gate failure, which should be fixed.
The dependency tree is: 

https://review.openstack.org/#/c/109035 - TLS (Evg)
depends on https://review.openstack.org/#/c/109849 Barbican module(Evg)
depends on https://review.openstack.org/#/c/105610 tests (Brandon)
depends on https://review.openstack.org/#/c/105609 DB (Brandon)
depends on https://review.openstack.org/#/c/105331 extension (Brandon)

Thanks,
Evg




-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Wednesday, July 30, 2014 3:39 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Managing change in gerrit which depends 
on multiple other changes in review

Hi Evgeny and Doug,

So the thing to keep in mind is that Gerrit determines a new review by the 
change-id in the commit message.  It then determines patch sets by the commit 
hashes.  This is my understanding of it at least.  A commit's hash gets changed 
on many actions such as cherry-picks, rebases, and commit --amend.

With this in mind, this means that you can verify if your changes will not 
cause an update to the ancestor changes in a gerrit dependency chain.  Before 
you do a git review just look at your git log and commit hashes and see if the 
hash for each of those commits are the same as the latest patch sets in gerrit.

My workflow is this:
If I just need to rebase the change, I just hit the rebase button in gerrit on 
my change only.  This will cause the commit to have a new hash, thus a new 
patch set.

If I need to make a change then just doing the normal git checkout from the 
gerrit change page, and git commit --amend works fine, because I am only 
touching that commit.

If I need to make a change AND rebase there are two ways to do this:
1. Hit Rebase Button on the gerrit change page then git checkout, make change, 
git commit --amend, git review.
- The problem with this is that it creates two patch sets.
2. git checkout the gerrit change that your gerrit change is dependent on.  
Then cherry-pick your gerrit change on top of that.  This is essentially a 
rebase, and now you can make changes to the code, commit --amend and git 
review.  Gerrit will only see this commit hash changed once, so only one patch 
set.
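
In command form that workflow is roughly the following (the change numbers are
just the ones from this thread, substitute your own):

  git review -d 105610      # check out the change yours depends on
  git review -x 109849      # cherry-pick your change on top of it
  # ... edit code ...
  git commit -a --amend
  git review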

One other thing to keep in mind is since your change is dependent on others you 
have to rely on your change's dependents to be rebased with master.  You 
shouldn't do a rebase against master until the change you are dependent on has 
been merged.  So the only time you should rebase is when gerrit shows the 
OUTDATED message on your dependency.

Hope that helps explain my methodology, which is still a work in progress.  
However, I think this is a decent methodology when dealing with a massive 
dependency chain like this.

Thanks,
Brandon


On Tue, 2014-07-29 at 16:05 +, Doug Wiegley wrote:
 Hi Evgeny,
 
 
 I’m not sure I’m doing it in the most efficient way, so I’d love to 
 hear pointers, but what I’ve been doing:
 
 
 First, to set up the dependent commit, the command is “git review -d”.
 I’ve been using this
 guide: 
 http://www.mediawiki.org/wiki/Gerrit/Advanced_usage#Create_a_dependenc
 y
 
 
 Second, when the dependent review changes, there is a ‘rebase’ button 
 on gerrit that’ll get things back in sync automatically.
 
 
 Third, if you need to change your code after rebasing from gerrit, 
 this is the only sequence I’ve tried that doesn’t result in something 
 weird (rebasing overwrites the dependent commits, silently, so I’m 
 clearly doing something wrong):
  1. Re-clone vanilla neutron
  2. Cd into new clone, setup for gerrit review
  3. Redo dependent commit setup
  4. Create your topic branch
  5. Cherry-pick your commit from gerrit into your new topic branch
  6. Use “git log -n5 --decorate --pretty=oneline”, and verify that
 your dependency commit hashes match what’s in gerrit.
  7. Git review
 
 
 Thanks,
 doug
 
 
 
 
 From: Evgeny Fedoruk evge...@radware.com
 

[openstack-dev] [oslo][nova] can't rebuild local tox due to oslo alpha packages

2014-07-30 Thread Matt Riedemann
I noticed yesterday that trying to rebuild tox in nova fails because it 
won't pull down the oslo alpha packages (config, messaging, rootwrap).


It looks like you need the --pre option with pip install to get these 
normally.


Also sounds like tox should already be doing --pre, but it doesn't 
appear to be with at least tox 1.6.1 in site-packages.


I'm using pip 1.5.6 which I thought was the latest.
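
One possible workaround, assuming the issue is just pip not considering
pre-releases, is to force --pre in tox's install command, e.g. in tox.ini
(updating virtualenv/pip inside the tox venvs is the other option):

  [testenv]
  install_command = pip install --pre -U {opts} {packages}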
--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann

This change:

https://review.openstack.org/#/c/105501/

Tries to pull in libvirt-python = 1.2.5 for testing.

I'm on Ubuntu Precise for development which has libvirt 0.9.8.

The latest libvirt-python appears to require libvirt = 0.9.11.

So do I have to move to Trusty?
--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Daniel P. Berrange
On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:
 This change:
 
 https://review.openstack.org/#/c/105501/
 
 Tries to pull in libvirt-python = 1.2.5 for testing.
 
 I'm on Ubuntu Precise for development which has libvirt 0.9.8.
 
 The latest libvirt-python appears to require libvirt = 0.9.11.
 
 So do I have to move to Trusty?

You can use the CloudArchive repository to get newer libvirt and
qemu packages for Precise, which is what anyone deploying the
Ubuntu provided OpenStack packages would be doing.
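
For example, on Precise something along these lines enables the Icehouse
cloud archive (the pocket and package names below are illustrative and may
need adjusting):

  sudo apt-get install ubuntu-cloud-keyring
  echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main" | \
      sudo tee /etc/apt/sources.list.d/cloud-archive.list
  sudo apt-get update
  sudo apt-get install libvirt-bin libvirt-dev python-libvirt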

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Sean Dague
On 07/30/2014 02:21 AM, Daniel P. Berrange wrote:
 On Wed, Jul 30, 2014 at 11:16:00AM +0200, Ihar Hrachyshka wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 29/07/14 18:12, Daniel P. Berrange wrote:
 On Tue, Jul 29, 2014 at 08:30:09AM -0700, Jay Pipes wrote:
 On 07/29/2014 06:13 AM, Daniel P. Berrange wrote:
 On Tue, Jul 29, 2014 at 02:04:42PM +0200, Thierry Carrez
 wrote:
 Ihar Hrachyshka a écrit : At the dawn of time there were no
 OpenStack stable branches, each distribution was maintaining
 its own stable branches, duplicating the backporting work. At
 some point it was suggested (mostly by RedHat and Canonical
 folks) that there should be collaboration around that task, 
 and the OpenStack project decided to set up official stable
 branches where all distributions could share the backporting
 work. The stable team group was seeded with package
 maintainers from all over the distro world.

 So these branches originally only exist as a convenient place
 to collaborate on backporting work. This is completely
 separate from development work, even if those days backports
 are often proposed by developers themselves. The stable
 branch team is separate from the rest of OpenStack teams. We
 have always been very clear tht if the stable branches are no
 longer maintained (i.e. if the distributions don't see the
 value of those anymore), then we'll consider removing them.
 We, as a project, only signed up to support those as long as
 the distros wanted them.

 We have been adding new members to the stable branch teams
 recently, but those tend to come from development teams
 rather than downstream distributions, and that starts to bend
 the original landscape. Basically, the stable branch needs to
 be very conservative to be a source of safe updates --
 downstream distributions understand the need to weigh the
 benefit of the patch vs. the disruption it may cause. 
 Developers have another type of incentive, which is to get
 the fix they worked on into stable releases, without
 necessarily being very conservative. Adding more -core people
 to the stable team to compensate the absence of distro
 maintainers will ultimately kill those branches.

 The situation I'm seeing is that the broader community believe
 that the Nova core team is responsible for the nova stable
 branches. When stuff sits in review for ages it is the core
 team that is getting pinged about it and on the receiving end
 of the complaints the inaction of review.

 Adding more people to the stable team won't kill those
 branches. I'm not suggesting we change the criteria for
 accepting patches, or that we dramatically increase the number
 of patches we accept. There is clearly alot of stuff proposed
 to stable that the existing stable team think is a good idea -
 as illustrated by the number of patches with at least one +2
 present. On the contrary, having a bigger stable team comprises
 all of core + interested distro maintainers will ensure that
 the stable branches are actually gettting the patches people
 in the field need to provide a stable cloud.

 -1

 In my experience, the distro maintainers who pioneered the stable
 branch teams had opposite viewpoints to core teams in regards to
 what was appropriate to put into a stable release. I think it's
 dangerous to populate the stable team with the core team members
 just because of long review and merge times.

 Sure there was some debate about what criteria were desired
 acceptance when stable trees were started. Once the criteria are
 defined I don't think it is credible to say that people are
 incapable of following the rules. In the unlikely event that people
 were to willfully ignore the agreed upon rules for stable tree,
 then I'd not trust them to be part of a core team working on any
 branch at all. With responsibility comes trust and an acceptance to
 follow the agreed upon processes.

 Still, it's quite common to see patches with no cherry-pick info, or
 Conflicts section removed, or incorrect Change-Id being approved and
 pushed. It's also common for people to review backports as if they are
 sent against master (with all nit picking, unit test changes
 requested, no consideration for stable branch applicability...)
 
 As said before, if people reviewing the code consistently fail to
 follow the agreed review guidelines, then point out what they're
 doing wrong and if they fail to improve, remove their +2 privileges.
 There's nothing unique about stable branches in this regard. If
 people reviewing master consistently did the wrong thing they'd
 have their +2 removed too.

Honestly, I think if anyone wants to be added to stable-maint, they
should raise their hand and join the team. I'm sure they would be welcomed.

I don't feel that stable maintenance is what we actually asked the core
teams to do as a community.

The risks are much higher for people approving stable backports, so it
should be something people explicitly get in the mindset to do.

I think it's also 

[openstack-dev] tempest api volume test failed

2014-07-30 Thread Nikesh Kumar Mahalka
I deployed a single node devstack on Ubuntu 14.04.
This devstack belongs to Juno.

When I run the tempest api volume tests, some of them fail.

Below are steps for devstack deployment:
1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
#FLAT_INTERFACE = eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh


Now, I am running the tempest tests:
cd /opt/stack/tempest
./run_tempest.sh tempest.api.volume


Below is portion of failed test :
Traceback (most recent call last):
  File /opt/stack/tempest/tempest/api/volume/test_volumes_get.py,
line 157, in test_volume_create_get_update_delete_as_clone
origin = self.create_volume()
  File /opt/stack/tempest/tempest/api/volume/base.py, line 103, in
create_volume
cls.volumes_client.wait_for_volume_status(volume['id'], 'available')
  File /opt/stack/tempest/tempest/services/volume/json/volumes_client.py,
line 162, in wait_for_volume_status
raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
VolumeBuildErrorException: Volume 4c195bdd-5fea-4da5-884e-69a2026d9ca0
failed to build and is in ERROR status


==
FAIL: 
tempest.api.volume.test_volumes_get.VolumesV1GetTest.test_volume_create_get_update_delete_from_image[gate,image,smoke]
--
Traceback (most recent call last):
_StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2014-07-30 18:42:49,462 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
200 POST http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes
0.300s
2014-07-30 18:42:49,545 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
200 GET 
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.082s
2014-07-30 18:42:50,626 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
200 GET 
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.079s
2014-07-30 18:42:50,698 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:_run_cleanups): 202 DELETE
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.069s
2014-07-30 18:42:50,734 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:_run_cleanups): 404 GET
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.035s
}}}

Traceback (most recent call last):
  File /opt/stack/tempest/tempest/test.py, line 128, in wrapper
return f(self, *func_args, **func_kwargs)
  File /opt/stack/tempest/tempest/api/volume/test_volumes_get.py,
line 153, in test_volume_create_get_update_delete_from_image
self._volume_create_get_update_delete(imageRef=CONF.compute.image_ref)
  File /opt/stack/tempest/tempest/api/volume/test_volumes_get.py,
line 63, in _volume_create_get_update_delete
self.client.wait_for_volume_status(volume['id'], 'available')
  File /opt/stack/tempest/tempest/services/volume/json/volumes_client.py,
line 162, in wait_for_volume_status
raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
VolumeBuildErrorException: Volume 6e6585a9-6f7b-42c0-b099-ec72c13a4040
failed to build and is in ERROR status


==
FAIL: setUpClass (tempest.api.volume.test_volumes_list.VolumesV1ListTestJSON)
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File /opt/stack/tempest/tempest/test.py, line 76, in decorator
f(cls)
  File /opt/stack/tempest/tempest/api/volume/test_volumes_list.py,
line 68, in setUpClass
volume = cls.create_volume(metadata=cls.metadata)
  File /opt/stack/tempest/tempest/api/volume/base.py, line 103, in
create_volume
cls.volumes_client.wait_for_volume_status(volume['id'], 'available')
  File /opt/stack/tempest/tempest/services/volume/json/volumes_client.py,
line 162, in wait_for_volume_status
raise 

Re: [openstack-dev] [oslo][nova] can't rebuild local tox due to oslo alpha packages

2014-07-30 Thread gordon chung
 I noticed yesterday that trying to rebuild tox in nova fails because it 
 won't pull down the oslo alpha packages (config, messaging, rootwrap).
i ran into this yesterday as well. Doug suggested i update my virtualenv and 
that worked. i went from 1.10.1 to 1.11.x

cheers,
gord  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tempest api volume test failed

2014-07-30 Thread Sean Dague
On 07/30/2014 06:51 AM, Nikesh Kumar Mahalka wrote:
 I deployed a single node devstack on Ubuntu 14.04.
 This devstack belongs to Juno.
 
 When i am running tempest api volume test, i am getting some tests failed.
 
 Below are steps for devstack deployment:
 1) git clone https://github.com/openstack-dev/devstack.git
 2)cd devstack
 3)vi local.conf
 
 [[local|localrc]]
 
 ADMIN_PASSWORD=some_password
 DATABASE_PASSWORD=$ADMIN_PASSWORD
 RABBIT_PASSWORD=$ADMIN_PASSWORD
 SERVICE_PASSWORD=$ADMIN_PASSWORD
 SERVICE_TOKEN=ADMIN
 #FLAT_INTERFACE = eth0
 FIXED_RANGE=192.168.2.80/29

This is a very small fixed range, it also does not include the address
of the host itself, which means this very well could be network related.
Look at your n-net logs and your cinder logs and there should be some
stack traces pointing you in the right direction.
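
For example, something closer to devstack's defaults usually avoids this (the
exact range below is only an illustration):

  FIXED_RANGE=10.0.0.0/24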

 #FLOATING_RANGE=192.168.20.0/25
 HOST_IP=192.168.2.64
 LOGFILE=$DEST/logs/stack.sh.log
 SCREEN_LOGDIR=$DEST/logs/screen
 SYSLOG=True
 SYSLOG_HOST=$HOST_IP
 SYSLOG_PORT=516
 RECLONE=yes
 CINDER_ENABLED_BACKENDS=client:client_driver
 
 [[post-config|$CINDER_CONF]]
 
 [client_driver]
 volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
 san_ip = 192.168.2.192
 san_login = some_name
 san_password =some_password
 client_iscsi_ips = 192.168.2.193
 
 4)./stack.sh
 
 
 Now,I am running tempest test
 cd /opt/stack/tempest
 ./run_tempest.sh tempest.api.volume
 
 
 Below is portion of failed test :
 Traceback (most recent call last):
   File /opt/stack/tempest/tempest/api/volume/test_volumes_get.py,
 line 157, in test_volume_create_get_update_delete_as_clone
 origin = self.create_volume()
   File /opt/stack/tempest/tempest/api/volume/base.py, line 103, in
 create_volume
 cls.volumes_client.wait_for_volume_status(volume['id'], 'available')
   File /opt/stack/tempest/tempest/services/volume/json/volumes_client.py,
 line 162, in wait_for_volume_status
 raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
 VolumeBuildErrorException: Volume 4c195bdd-5fea-4da5-884e-69a2026d9ca0
 failed to build and is in ERROR status
 
 
 ==
 FAIL: 
 tempest.api.volume.test_volumes_get.VolumesV1GetTest.test_volume_create_get_update_delete_from_image[gate,image,smoke]
 --
 Traceback (most recent call last):
 _StringException: Empty attachments:
   stderr
   stdout
 
 pythonlogging:'': {{{
 2014-07-30 18:42:49,462 16328 INFO [tempest.common.rest_client]
 Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
 200 POST http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes
 0.300s
 2014-07-30 18:42:49,545 16328 INFO [tempest.common.rest_client]
 Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
 200 GET 
 http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
 0.082s
 2014-07-30 18:42:50,626 16328 INFO [tempest.common.rest_client]
 Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
 200 GET 
 http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
 0.079s
 2014-07-30 18:42:50,698 16328 INFO [tempest.common.rest_client]
 Request (VolumesV1GetTest:_run_cleanups): 202 DELETE
 http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
 0.069s
 2014-07-30 18:42:50,734 16328 INFO [tempest.common.rest_client]
 Request (VolumesV1GetTest:_run_cleanups): 404 GET
 http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
 0.035s
 }}}
 
 Traceback (most recent call last):
   File /opt/stack/tempest/tempest/test.py, line 128, in wrapper
 return f(self, *func_args, **func_kwargs)
   File /opt/stack/tempest/tempest/api/volume/test_volumes_get.py,
 line 153, in test_volume_create_get_update_delete_from_image
 self._volume_create_get_update_delete(imageRef=CONF.compute.image_ref)
   File /opt/stack/tempest/tempest/api/volume/test_volumes_get.py,
 line 63, in _volume_create_get_update_delete
 self.client.wait_for_volume_status(volume['id'], 'available')
   File /opt/stack/tempest/tempest/services/volume/json/volumes_client.py,
 line 162, in wait_for_volume_status
 raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
 VolumeBuildErrorException: Volume 6e6585a9-6f7b-42c0-b099-ec72c13a4040
 failed to build and is in ERROR status
 
 
 ==
 FAIL: setUpClass (tempest.api.volume.test_volumes_list.VolumesV1ListTestJSON)
 --
 Traceback (most recent call last):
 _StringException: Traceback (most recent call last):
   File /opt/stack/tempest/tempest/test.py, line 76, in decorator
 f(cls)
   

Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Kyle Mestery
On Wed, Jul 30, 2014 at 7:52 AM, Thierry Carrez thie...@openstack.org wrote:
 Russell Bryant wrote:
 On 07/29/2014 12:12 PM, Daniel P. Berrange wrote:
 Sure there was some debate about what criteria were desired acceptance
 when stable trees were started. Once the criteria are defined I don't
 think it is credible to say that people are incapable of following the
 rules. In the unlikely event that people were to willfully ignore the
 agreed upon rules for stable tree, then I'd not trust them to be part
 of a core team working on any branch at all. With responsibility comes
 trust and an acceptance to follow the agreed upon processes.

 I agree with this.  If we can't trust someone on *-core to follow the
 stable criteria, then they shouldn't be on *-core in the first place.
 Further, if we can't trust the combination of *two* people from *-core
 to approve a stable backport, then we're really in trouble.

 There are a few different facets on this issue.

 The first facet is a community aspect. Stable branch maintenance is a
 task separate from upstream development, ideally performed by the people
 that have a direct interest in having good, maintained stable branches
 (downstream distribution packagers). Now, if *all PTLs* are fine with
 adding stable branch maintenance to the tasks of their core reviewers
 (in addition to specs and master branch review), then I guess we can
 abandon that concept of separate tasks. But I wasn't under the
 impression the core population was looking forward to add extra duties
 to their current workload.

Speaking as a PTL, I don't think adding all core team members for
projects as stable maintainers would be a good thing. As pointed out
already in this thread, the task of code reviews for stable is much
different than that of code review for master. Having a more focused
group of stable reviewers is likely a better solution than adding a
large group of reviewers focused on upstream work. Also, as Sean
pointed out in a followup email here, we have a process for people to
propose themselves to stable maintenance [1]. Perhaps we need to
publicize that a bit more to get more people who can actively help in
the stable reviews to join the team.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/StableBranch#Joining_the_Team

 The second facet is the opt-in nature of the job. Yes, core reviewers
 are trusted with acceptation of patches on the master branch and
 probably have the capacity to evaluate stable branch patches as well.
 But stable branches have different rules, and most current core
 reviewers don't know them. There have been numerous cases where an
 inappropriate patch was pushed to stable/* by a core developer, who was
 very insistent on having his bugfix merged and rejected the rules we
 were opposing to them. I'm fine with adding core reviewers which have
 read and agreed to follow the stable rules, but adding them all by
 default sounds more like a convenient way to push your own patches in
 than a way to solve the stable manpower issue.

 The third facet is procedural: we currently have a single stable-maint
 team for all integrated projects. The original idea is that the stable
 branch maintainers do not judge the patch itself (which was judged by
 -core people when the patch landed in master), they just judge if it's
 appropriate for stable backport (how disruptive it is, if it's feature-y
 or if it adds a new config option, or if it changes default behavior).
 You don't need that much to be a domain expert to judge that, and if
 unsure we would just ask. If we add more project-specific core
 reviewers, we should probably switch to per-project stable-maint groups.
 I'm not opposed to that, just saying that's an infra config change we
 need to push.

 I'm generally open to changing that, since I reckon we have a manpower
 issue on the stable maint side. I just want to make sure everyone knows
 what they are signing up for here. We would do it for all the projects,
 we can't special-case Nova.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] nova-network as ML2 mechanism?

2014-07-30 Thread Kyle Mestery
On Tue, Jul 29, 2014 at 10:12 AM, Jonathan Proulx j...@jonproulx.com wrote:
 Hi All,

 Would making an nova-network mechanism driver for the ml2 plugin be possible?

This has been discussed a bit, and yes, in theory this is possible.
Nachi has started looking into this as far as I know. I also think
this may have come up during the nova mid-cycle which is going on this
week.

 I'm an operator not a developer so apologies if this has been
 discussed and is either planned or impossible, but a quick web search
 didn't hit anything.

 As an operator I would envision this a a transition mechanism, which
 AFAIK is still lacking, between nova network and neutron.

 If a DB transition scrip similar to the ovs-ml2 conversion could be
 created, operator could transition their controller/network-nodes to
 neutron while initially leaving the compute nodes with active
 nova-network configs active.  It's a much simpler matter for most
 operators I think to then do rolling upgrades of compute hosts to
 proper neutron agents either by live migrating existing VMs or simply
 through attrition.  And this would preserve continuity of VMs through
 the upgrade (these may be cattle but you still don't want to slaughter
 all  of them at once!)

 This is no longer my use case as I jumped into neutron with Grizzly,
 but having just transitioned to Icehouse and ML2, it got me to
 thinking.  If this sounds feasible from a development standpoint I'd
 recommend taking the discussion to the operators list to see if others
 share my opinion before doing major work in that direction.

This is a really good idea actually, thanks for sharing this. One
place where I feel like we haven't had enough input in the
nova-network/neutron parity discussion is around input from operators.

Thanks,
Kyle

 Just a thought,
 -Jon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [nova] Parity meeting cancelled this week

2014-07-30 Thread Kyle Mestery
Given the fact most of the key players are at the nova mid-cycle this
week, lets cancel the parity meeting [1] for today. Next week, lets
focus on what came out of the nova mid-cycle and a plan for the final
weeks of Juno.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/NeutronNovaNetworkParity

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Generate Event or Notification in Ceilometer

2014-07-30 Thread Jay Pipes

On 07/30/2014 12:12 AM, Duan, Li-Gong (Gary@HPServers-Core-OE-PSC) wrote:

Hi Jay,

Thanks for your comment. Your suggestion is good, but I am wondering why
we cannot use or leverage Ceilometer to monitor infrastructure-related things,
as it can be used to monitor tenant-related things.


You *could* use Ceilometer for this, sure. But I just don't recommend 
it. For performance reasons.


Best,
-jay


On 07/29/2014 02:05 AM, Duan, Li-Gong (Gary at HPServers-Core-OE-PSC
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev) wrote:

Hi Folks,

Are there any guide or examples to show how to produce a new event or
notification and add a handler for this event in ceilometer?

I am asked to implement OpenStack service monitoring which will send an
event and trigger the handler once a service, say nova-compute, crashes,
in a short time. :(

The link (http://docs.openstack.org/developer/ceilometer/events.html)
does a good job on the explanation of the concept, and hence I know that I
need to emit a notification to the message queue and ceilometer-collector
will process it and generate events, but it is far from a real implementation.

I would not use Ceilometer for this, as it is more tenant-facing than
infrastructure-service facing. Instead, I would use a tried-and-true
solution like Nagios and NRPE checks. Here's an example of such a check
for a keystone endpoint:

https://github.com/ghantoos/debian-nagios-plugins-openstack/blob/master/plugins/check_keystone

Best,
-jay

*From:* Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
*Sent:* Tuesday, July 29, 2014 5:05 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* [Ceilometer] Generate Event or Notification in Ceilometer

Hi Folks,

Are there any guide or examples to show how to produce a new event or
notification and add a handler for this event in ceilometer?

I am asked to implement OpenStack service monitoring which will send an
event and trigger the handler once a service, say nova-compute, crashes,
in a short time. :(

The link (http://docs.openstack.org/developer/ceilometer/events.html)
does a good job on the explanation of concept and hence I know that I
need to emit notification to message queue and ceilometer-collector will
process them and generate events but it is far from real implementations.

Regards,

Gary



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Generate Event or Notification in Ceilometer

2014-07-30 Thread Sandy Walsh
If all you want to do is publish a notification you can use oslo.messaging 
directly. Or, for something lighter weight, we have Notabene, which is a small 
wrapper on Kombu.

An example of how our notification simulator/generator uses it is available 
here:
https://github.com/StackTach/notigen/blob/master/bin/event_pump.py

Of course, you'll have to ensure you fabricate a proper event payload.
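
For example, a bare-bones oslo.messaging publisher could look like the sketch
below (the transport URL, publisher id, topic, event type and payload fields
are only illustrative, not an agreed format):

  from oslo.config import cfg
  from oslo import messaging

  transport = messaging.get_transport(
      cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
  notifier = messaging.Notifier(transport,
                                publisher_id='service-monitor.host1',
                                driver='messaging',
                                topic='notifications')
  # emit one notification; the context can simply be an empty dict here
  notifier.info({}, 'service.check.failed',
                {'service': 'nova-compute', 'host': 'compute-01'})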

Hope it helps
-S


From: Duan, Li-Gong (Gary@HPServers-Core-OE-PSC) [li-gong.d...@hp.com]
Sent: Tuesday, July 29, 2014 6:05 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ceilometer] Generate Event or Notification in 
Ceilometer

Hi Folks,

Are there any guide or examples to show how to produce a new event or 
notification and add a handler for this event in ceilometer?

I am asked to implement OpenStack service monitoring which will send an event 
and trigger the handler once a service, say nova-compute, crashes, in a short 
time. :(
The link (http://docs.openstack.org/developer/ceilometer/events.html) does a 
good job on the explanation of concept and hence I know that I need to emit 
notification to message queue and ceilometer-collector will process them and 
generate events but it is far from real implementations.

Regards,
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? July 30 2014

2014-07-30 Thread Anne Gentle
Hi all,
We had the APAC docs team meeting and here are the minutes and logs:
http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-07-30-03.01.html

http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-07-30-03.01.log.html

Lots going on in doclandia so let's get started.

__In review and merged this past week__

The Architecture Design Guide is merrily on its way to print-ready. We're
fixing nits, making sure images are readable, and generally fretting as we
ought before cementing it on the printed page. Once it's ready in the
master branch, I'll make a PDFA-1a and upload to Lulu with the cover and
it'll be ready to buy! My goal is to finish this week.

Nearly 200 doc patches merged in the past week, wowsa. Looking for themes,
the training guides had a big push and the cleanup on the Arch Design Guide
accounted for quite a few patches. The Identity API v2.0 continues to be
buggy and we're fixing and breaking as we go. Lots of discussion around the
maintenance difficulties with the API Reference Guides on the mailing
list [1] and I'd love more input on the cost/benefit for those particular
guides.  [2]

I'm also pleased to see at least three new doc contributors with patches,
thanks to our newcomers!

__High priority doc work__

We are gearing up for two focused efforts on Networking/neutron docs. One
is a doc swarm in Australia; the other is a doc day after the Ops meetup in
San Antonio Texas. Here are the details:

 * Brisbane, Australia, August 9th: http://openstack-swarm.rhcloud.com/
Contact Lana Brindley with any questions.
 * San Antonio August 27th:
https://etherpad.openstack.org/p/neutron-docs-juno Contact Edgar Magana
with any questions.

__Ongoing doc work__

The Heat Orchestration Template (HOT) Guide is underway with the spec
nearly there: https://review.openstack.org/#/c/108133/
and the initial import in review: https://review.openstack.org/#/c/108245/

The review of gap coverage at this week's TC meeting shows a lot of team's
efforts on bridging the gaps in their documentation:
- The ceilometer team is working on the user and operator documentation.
- The trove team is pounding through their backlog on docs, great efforts.
https://etherpad.openstack.org/p/trove-doc-items
- The neutron team and Edgar are gearing up for the August work.
https://etherpad.openstack.org/p/neutron-docs-juno
- The horizon team has a start on a contributor pattern guide for the
Dashboard. [3]

__New incoming doc requests__

We really need to stay on top of the incoming doc bugs. Use
https://wiki.openstack.org/wiki/BugTriage to triage incoming bugs for API
docs or ops/user docs. Thanks Tom for the reminder!

__Doc tools updates__

We've got epub work in progress at https://review.openstack.org/#/c/107825/

__Other doc news__

Be on the lookout for a possible change to the Identity program due to
changing the scope of the mission to Authorization, Auditing, and
Authentication: https://review.openstack.org/#/c/108739


1. http://lists.openstack.org/pipermail/openstack-docs/2014-July/004920.html
2. http://docs.openstack.org/api/api-ref-guides.html
3.
https://docs.google.com/presentation/d/1OKy_oXZQSg8Feo0p6Es7giR6a-w_CK8H03D2R2yAUjs/edit?usp=sharing
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Barbican] Keystone PKI token too much long

2014-07-30 Thread Dolph Mathews
We recently merged an implementation for GET /v3/catalog which finally
enables POST /v3/auth/tokens?nocatalog to be a reasonable default behavior,
at the cost of an extra HTTP call from remote service back to keystone
where necessary.

Spec:
https://github.com/openstack/keystone-specs/blob/master/specs/juno/get-catalog.rst
Blueprint: https://blueprints.launchpad.net/keystone/+spec/get-catalog
API change:
https://review.openstack.org/#/c/106854/1/v3/src/markdown/identity-api-v3.md
Implementation: https://review.openstack.org/#/c/106893/

I also filed wishlist bug 135 (UUID is a more friendly default token
provider than PKI) recently, based on the developer community's recent
discussions, which Morgan has subsequently raised to the the mailing list
with a survey.

Bug: https://bugs.launchpad.net/keystone/+bug/135
Corresponding change of defaults: https://review.openstack.org/#/c/110488/
Survey on the mailing list:
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041474.html

On Tue, Jul 22, 2014 at 4:04 AM, Giuseppe Galeota giuseppegale...@gmail.com
 wrote:


 ​Dear all,
 is there any good news about the problem related to
 the Keystone PKI token being too long for Barbican?

 Thank you,
 Giuseppe



 2014-01-31 14:27 GMT+01:00 Ferreira, Rafael r...@io.com:

  By the way, you can achieve the same benefits of uuid tokens (shorter
 tokens) with PKI by simply using a md5 hash of the PKI token for your
 X-Auth headers. This is poorly documented but it seems to work just fine.

   From: Adam Young ayo...@redhat.com
 Date: Tuesday, January 28, 2014 at 1:41 PM
 To: openst...@lists.openstack.org openst...@lists.openstack.org
 Subject: Re: [Openstack] [Barbican] Keystone PKI token too much long

   On 01/22/2014 12:21 PM, John Wood wrote:

  (Adding another member of our team Douglas)

  Hello Giuseppe,

  For questions about news or patches for Keystone's PKI vs UUID modes,
 you might reach out to the openstack-dev@lists.openstack.org mailing
 list, with the subject line prefixed with [openstack-dev] [keystone]

  Our observation has been that the PKI mode can generate large text
 blocks for tokens (esp. for large service catalogs) that cause http header
 errors.

  Regarding the specific barbican scripts you are running, we haven't run
 those in a while, so I'll investigate as we might need to update them.
 Please email back your /etc/barbican/barbican-api-paste.ini paste config
 file when you have a chance as well.

  Thanks,
 John


  --
 *From:* Giuseppe Galeota [giuseppegale...@gmail.com]
 *Sent:* Wednesday, January 22, 2014 7:36 AM
 *To:* openst...@lists.openstack.org
 *Cc:* John Wood
 *Subject:* [Openstack] [Barbican] Keystone PKI token too much long

  Dear all,
 I have configured Keystone for Barbican using this guide
 https://github.com/cloudkeep/barbican/wiki/Developer-Guide-for-Keystone
 .

  Is there any news or patch about the need to use a shorter token? I
 would not use a modified token.

 It's a known problem.  You can request a token without the service catalog
 using an extension.

 One possible future enhancement is to compress the key.



  Following you can find an extract of the linked guide:

- (Optional) Typical keystone setup creates PKI tokens that are long,
do not fit easily into curl requests without splitting into components. 
 For
testing purposes suggest updating the keystone database with a shorter
token-id. (An alternative is to set up keystone to generate uuid tokens.)
From the above output grab the token expiry value, referred to as x-y-z

  mysql -u rootuse keystone;update token set id=foo where expires=x-y-z ;


  Thank you,
 Giuseppe


 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


  The communication contained in this e-mail is confidential and is
 intended only for the named recipient(s) and may contain information that
 is privileged, proprietary, attorney work product or exempt from disclosure
 under applicable law. If you have received this message in error, or are
 not the named recipient(s), please note that any form of distribution,
 copying or use of this communication or the information in it is strictly
 prohibited and may be unlawful. Please immediately notify the sender of the
 error, and delete this communication including any attached files from your
 system. Thank you for your cooperation.

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] [ceilometer] [swift] Improving ceilometer.objectstore.swift_middleware

2014-07-30 Thread Chris Dent


ceilometer/objectstore/swift_middleware.py[1] counts the size of web
request and reponse bodies through the swift proxy server and publishes
metrics of the size of the request and response and that a request
happened at all.

There are (at least) two bug reports associated with this bit of code:

* avoid requirement on tarball for unit tests
  https://bugs.launchpad.net/ceilometer/+bug/1285388

* significant performance degradation when ceilometer middleware for
  swift proxy uses
  https://bugs.launchpad.net/ceilometer/+bug/1337761

On the first bug the goal is to remove the dependency on swift from
ceilometer. This is halfway done but there are barriers[2] with
regard to the apparently unique way that swift does logging and the
fact that InputProxy and split_path live in swift rather than some
communal location. The barriers may be surmountable but if other
things in the same context are changing, it might not be necessary.

On the second bug, while the majority of the performance cost is in
the call to rpc_server.cast(), achieving maximum performance would
probably come from doing the counts and notifications _not_ in
middleware. The final application in the WSGI stack already knows the
size of requests and responses without needing to recalculate them later.
May as well use that.

These two situations overlap in a few ways that suggest we could make
some improvements. I'm after input from both the swift crew and the
ceilometer crew to see if we can reach something that is good for the
long term rather than short term fixes to these bugs.

Some options appear to be:

* Move the middleware to swift or move the functionality to swift.

  In the process make the functionality drop generic notifications for
  storage.objects.incoming.bytes and storage.objects.outgoing.bytes
  that anyone can consume, including ceilometer.

  This could potentially address both bugs.

* Move or copy swift.common.utils.{InputProxy,split_path} to somewhere
  in oslo, but keep the middleware in ceilometer.

  This would require somebody sharing the info on how to properly
  participate in swift's logging setup without incorporating swift.

  This would fix the first bug without saying anything about the
  second.

* Carry on importing the swift tarball or otherwise depending on
  swift.

  Fixes neither bug, maintains status quo.

What are other options? Of those above which are best or most
realistic?

Personally I'm a fan of the first option: move the functionality into
swift and take it out of middleware. This gets the maximum win for
performance and future flexibility (of consumers).
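
For illustration, the generic notification from the first option could be as
simple as something like the following (notifier here is assumed to be an
oslo.messaging Notifier; the payload fields are assumptions, not an agreed
format):

  notifier.info(ctxt, 'storage.objects.outgoing.bytes',
                {'account': account, 'container': container,
                 'object': obj, 'bytes': bytes_sent})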

[1]
https://github.com/openstack/ceilometer/blob/master/ceilometer/objectstore/swift_middleware.py

[2] https://review.openstack.org/#/c/110302/

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Datastore/Versions API improvements

2014-07-30 Thread Denis Makogon
Hello, Stackers.



I’d like to gather the Trove team around a question related to the
Datastores/Version API responses (request/response payloads and HTTP codes).

Small INFO

When a deployer creates a datastore and versions for it, Trove's backend
receives a request to store DBDatastore and DBDatastoreVersion objects with
certain parameters. The most interesting attribute of DBDatastoreVersion is
“packages” - it is stored as a String object (and that is totally fine).
But when we try to query a given datastore version through the
Datastores API, the “packages” attribute is returned as a String object too.
And it seems that this breaks the response pattern - if a given attribute
represents a complex value, such as a list, dict or tuple, it should be
returned as such.

So, the first question is - are we able to change it in terms of V1?

The second question is about the admin_context decorator (see [1]). This method
executes methods of the given controller and verifies that the user is allowed
to execute a certain procedure.

Taking into account RFC 2616, this method should raise HTTP Forbidden
(code 403) if the user tries to execute a request that he is not allowed to.

But the given method returns HTTP Unauthorized (code 401), which seems weird
since the user is already authenticated.

This is definitely a bug. And it comes from [2].


[1]
https://github.com/openstack/trove/blob/master/trove/common/auth.py#L72-L87

[2]
https://github.com/openstack/trove/blob/master/trove/common/wsgi.py#L316-L318



Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Oslo.messaging] RPC failover handling in rabbitmq driver

2014-07-30 Thread Bogdan Dobrelya
On 07/28/2014 11:20 AM, Bogdan Dobrelya wrote:
 Hello.
 I'd like to bring your attention to major RPC failover issue in
 impl_rabbit.py [0]. There are several *related* patches and a number of
 concerns should be considered as well:
 - Passive exchanges fix [1] (looks like the problem is much deeper than
 it seems though).
 - the first version of the fix [2] which makes the producer to declare a
 queue and bind it to exchange as well as consumer does.
 - Making all RPC involved reply_* queues durable in order to preserve
 them in RabbitMQ after failover (there could be a TTL for such a queues
 as well)
 - RPC throughput tuning patch [3]
 
 I believe the issue [0] should be at least prioritized and assigned to
 some milestone.
 
 [0] https://bugs.launchpad.net/oslo.messaging/+bug/1338732
 [1] https://review.openstack.org/#/c/109373/
 [2]
 https://github.com/noelbk/oslo.messaging/commit/960fc26ff050ca3073ad90eccbef1ca95712e82e
 [3] https://review.openstack.org/#/c/109143/
 

There is a small update for this RabbitMQ RPC failover research:
Stan Lagun submitted the patch [0] for related bug [1].
Please don't hesitate to join the review process.

Basically the idea of the patch is to address the step 3
(rabbit dies and restarts) for *mirrored rabbit clusters*.
Obviously, it changes nothing for single rabbit host case because we
cannot failover then we have no cluster.

I agree the issue is more common than just impl_rabbit, but at least we
could start addressing it from here.

Speaking in general, it looks like RPC should be standardized more
thoroughly, maybe as some new RFC, and it should provide rules for:
  a) how to handle AMQP connection HA failovers at the RPC layer, both for
drivers and applications, and both for the client and server side (speaking in
terms of RPC);
  b) how to handle RPC retries in single AMQP host configurations and
in HA as well.
That would also allow amqp driver developers to borrow some logic
from the app layers, if needed (and vice versa for app developers), without
causing the havoc and sorrow we have now in oslo.messaging :-)

[0] https://review.openstack.org/110058
[1] https://bugs.launchpad.net/oslo.messaging/+bug/1349301

-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Specs improvements. Review request.

2014-07-30 Thread Denis Makogon
Hello, Stackers.


I’ve been working on several specs and I’d like to receive early
feedback on the updated specs for:


   - Database log manipulations. Describes initial feature description.
     https://wiki.openstack.org/wiki/Trove/DBInstanceLogOperation
   - Events notifications. Ceilometer integration.
     https://wiki.openstack.org/wiki/Trove/ceilometer_integration
   - Datastores/Versions Management APIs.
     https://wiki.openstack.org/wiki/Trove/DatastoreManagementAPI


Folks, if you don’t mind, I’d like to receive feedback by the end
of Friday so I can submit them for Monday's blueprint review.


Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Not support dnsmasq 2.63?

2014-07-30 Thread Mark McClain
The hard limit should be 2.63 since that is supported in all of the modern long 
term releases from the distros.  I’d prefer we not exit processes because we’ve 
been removing active checks on process starts.

mark




On Jul 29, 2014, at 9:51 PM, Xuhan Peng 
pengxu...@gmail.commailto:pengxu...@gmail.com wrote:

We bumped the minimum version of dnsmasq to 2.63 a while ago by this code 
change:

https://review.openstack.org/#/c/105378/

However, currently we still kind of support earlier version of dnsmasq 
because we only give a warning and don't exit the program when we find dnsmasq 
version is less than the minimum version. This causes some confusion and 
complicates the code since we need to take care different syntax of dnsmasq of 
different version in dhcp code (Note that the previous version doesn't support 
tag).

I wonder what's your opinion on NOT supporting dnsmasq versions less than 2.63 
in Juno? I think we can print an error message and exit the program when we 
detect an invalid version, but I would like to gather more thoughts on this one.

Thanks,
Xu Han
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Elena Ezhova
Hello everyone!

Some recent change requests ([1], [2]) show that there are a number of
issues with locking db resources in Neutron.

One of them is the initialization of drivers, which can be performed
simultaneously by several neutron servers. In this case locking is
essential for avoiding conflicts, and it is now mostly done via
SQLAlchemy's with_lockmode() method, which emits SELECT ... FOR UPDATE,
resulting in rows being locked within a transaction. As has already been
stated by Mike Bayer [3], this statement is not supported by Galera and,
what's more, does not work for Postgresql when the table is empty.
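
For reference, the pattern in question looks roughly like this (the model
and filter here are made up for illustration):

    # Allocation and segment_id are made-up names for illustration.
    allocation = (session.query(Allocation).
                  filter_by(segment_id=segment_id).
                  with_lockmode('update').  # emits SELECT ... FOR UPDATE
                  first())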

That is why there is a need for an easy solution that would allow
cross-server locking and would work for every backend. The first thing that
comes to mind is to create a table which would contain all locks acquired
by various pieces of code. Any code that wishes to access a table that
needs locking would have to perform the following steps:

1. Check whether the lock is already acquired by using SELECT lock_name FROM
the cross_server_locks table.

2. If the SELECT returned nothing, acquire the lock by inserting it into the
cross_server_locks table.

   Otherwise, wait and then try again until a timeout is reached.

3. After the code has executed, it should release the lock by deleting the
corresponding entry from the cross_server_locks table.

The locking process can be implemented by decorating the function that
performs the transaction with a special wrapper, or as a context manager.
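
As a rough sketch, the context manager variant could look like this (the
table layout and helper names are hypothetical, and a UNIQUE constraint on
lock_name is assumed so that two servers cannot insert the same lock):

    import contextlib
    import time

    from sqlalchemy import text
    from sqlalchemy.exc import IntegrityError


    class LockTimeout(Exception):
        pass


    @contextlib.contextmanager
    def cross_server_lock(session, lock_name, timeout=30, interval=1):
        # Assumes a UNIQUE constraint on cross_server_locks.lock_name.
        deadline = time.time() + timeout
        while True:
            try:
                session.execute(text(
                    "INSERT INTO cross_server_locks (lock_name)"
                    " VALUES (:name)"), {"name": lock_name})
                session.commit()
                break                       # lock acquired
            except IntegrityError:          # another server holds the lock
                session.rollback()
                if time.time() > deadline:
                    raise LockTimeout(lock_name)
                time.sleep(interval)
        try:
            yield
        finally:
            session.execute(text(
                "DELETE FROM cross_server_locks WHERE lock_name = :name"),
                {"name": lock_name})
            session.commit()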

Thus, I wanted to ask the community whether this approach deserves
consideration and, if yes, it would be necessary to decide on the format of
an entry in cross_server_locks table: how a lock_name should be formed,
whether to support different locking modes, etc.


[1] https://review.openstack.org/#/c/101982/

[2] https://review.openstack.org/#/c/107350/

[3]
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] New name for the project

2014-07-30 Thread Kurt Griffiths
Hi everyone, we have discussed a few new names for the project to avoid 
trademark issues. Previously, we had chosen “Naav” but several people weren’t 
feeling great about that name. So, we discussed this  today in 
#openstack-marconi and got consensus to rename Marconi to Zaqar. If anyone has 
any vehement objections, let me know right away, otherwise I’d like to move 
forward on the new name.

Thanks,
Kurt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Clint Byrum
Please do not re-invent locking.. the way we reinvented locking in Heat.
;)

There are well known distributed coordination services such as Zookeeper
and etcd, and there is an abstraction for them already called tooz:

https://git.openstack.org/cgit/stackforge/tooz/
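
For example, with tooz the locking part of this could look roughly like the
sketch below (assuming a ZooKeeper backend is reachable at zk_host; the
member id, lock name and initialize_driver() are placeholders):

    from tooz import coordination

    # zk_host, the member id and initialize_driver() are placeholders.
    coordinator = coordination.get_coordinator(
        'zookeeper://zk_host:2181', b'neutron-server-1')
    coordinator.start()

    with coordinator.get_lock(b'driver-initialization'):
        # only one neutron-server at a time gets to run this
        initialize_driver()

    coordinator.stop()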

Excerpts from Elena Ezhova's message of 2014-07-30 09:09:27 -0700:
 Hello everyone!
 
 Some recent change requests ([1], [2]) show that there is a number of
 issues with locking db resources in Neutron.
 
 One of them is initialization of drivers which can be performed
 simultaneously by several neutron servers. In this case locking is
 essential for avoiding conflicts which is now mostly done via using
 SQLAlchemy's
 with_lockmode() method, which emits SELECT..FOR UPDATE resulting in rows
 being locked within a transaction. As it has been already stated by Mike
 Bayer [3], this statement is not supported by Galera and, what’s more, by
 Postgresql for which a lock doesn’t work in case when a table is empty.
 
 That is why there is a need for an easy solution that would allow
 cross-server locking and would work for every backend. First thing that
 comes into mind is to create a table which would contain all locks acquired
 by various pieces of code. Each time a code, that wishes to access a table
 that needs locking, would have to perform the following steps:
 
 1. Check whether a lock is already acquired by using SELECT lock_name FROM
 cross_server_locks table.
 
 2. If SELECT returned None, acquire a lock by inserting it into the
 cross_server_locks table.
 
In other case wait and then try again until a timeout is reached.
 
 3. After a code has executed it should release the lock by deleting the
 corresponding entry from the cross_server_locks table.
 
 The locking process can be implemented by decorating a function that
 performs a transaction by a special function, or as a context manager.
 
 Thus, I wanted to ask the community whether this approach deserves
 consideration and, if yes, it would be necessary to decide on the format of
 an entry in cross_server_locks table: how a lock_name should be formed,
 whether to support different locking modes, etc.
 
 
 [1] https://review.openstack.org/#/c/101982/
 
 [2] https://review.openstack.org/#/c/107350/
 
 [3]
 https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] bp/pxe boot

2014-07-30 Thread Angelo Matarazzo

Hi folks,
I would like to add PXE boot capability to Nova/libvirt and Horizon too.
Currently, compute instances must be booted from images (or snapshots) 
stored in Glance or volumes stored in Cinder.
Our idea, already described in [1] [2], aims to provide a design for 
booting compute instances from a PXE boot server, i.e. bypassing the 
image/snapshot/volume requirement.
There is already an open blueprint, but I would like to register a new 
one because it has had no updates since 2013.

[1] https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe
[2] https://wiki.openstack.org/wiki/Nova/Blueprints/pxe-boot-instance

What do you think?

Thanks beforehand

Angelo

--
Angelo Matarazzo

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)
E-mail: angelo.matara...@dektech.com.au
WEB: www.dektech.com.au

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Juno / Graduation Planning Session Today

2014-07-30 Thread Kurt Griffiths
Hi everyone, sorry for the short notice, but we are going to hold a special 
roadmap planning meeting today. Everyone is welcome to attend, but I esp. need 
core reviewers to attend:

When: 2100 UTC
Where: #openstack-marconi
Agenda: https://etherpad.openstack.org/p/marconi-scratch

Hope to see you there!

—
Kurt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes
There's also no need to use locks at all for this (distributed or 
otherwise).


You can use a compare and update strategy with an exponential backoff 
similar to the approach taken here:


https://review.openstack.org/#/c/109837/

I'd have to look at the Neutron code, but I suspect that a simple 
strategy of issuing the UPDATE SQL statement with a WHERE condition that 
is constructed to take into account the expected current record state 
would do the trick...
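
Something along these lines (a sketch only; the table and column names are
invented here, not actual Neutron schema):

    from sqlalchemy import text

    # Table/column names are invented for illustration.
    def mark_driver_initialized(session, driver_name):
        result = session.execute(text(
            "UPDATE ml2_driver_state"
            "   SET status = 'initialized'"
            " WHERE name = :name AND status = 'uninitialized'"),
            {"name": driver_name})
        session.commit()
        # rowcount == 0 means another server changed the row first;
        # the caller can back off exponentially and re-read the state.
        return result.rowcount == 1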


Best,
-jay

On 07/30/2014 09:33 AM, Clint Byrum wrote:

Please do not re-invent locking.. the way we reinvented locking in Heat.
;)

There are well known distributed coordination services such as Zookeeper
and etcd, and there is an abstraction for them already called tooz:

https://git.openstack.org/cgit/stackforge/tooz/

Excerpts from Elena Ezhova's message of 2014-07-30 09:09:27 -0700:

Hello everyone!

Some recent change requests ([1], [2]) show that there is a number of
issues with locking db resources in Neutron.

One of them is initialization of drivers which can be performed
simultaneously by several neutron servers. In this case locking is
essential for avoiding conflicts which is now mostly done via using
SQLAlchemy's
with_lockmode() method, which emits SELECT..FOR UPDATE resulting in rows
being locked within a transaction. As it has been already stated by Mike
Bayer [3], this statement is not supported by Galera and, what’s more, by
Postgresql for which a lock doesn’t work in case when a table is empty.

That is why there is a need for an easy solution that would allow
cross-server locking and would work for every backend. First thing that
comes into mind is to create a table which would contain all locks acquired
by various pieces of code. Each time a code, that wishes to access a table
that needs locking, would have to perform the following steps:

1. Check whether a lock is already acquired by using SELECT lock_name FROM
cross_server_locks table.

2. If SELECT returned None, acquire a lock by inserting it into the
cross_server_locks table.

In other case wait and then try again until a timeout is reached.

3. After a code has executed it should release the lock by deleting the
corresponding entry from the cross_server_locks table.

The locking process can be implemented by decorating a function that
performs a transaction by a special function, or as a context manager.

Thus, I wanted to ask the community whether this approach deserves
consideration and, if yes, it would be necessary to decide on the format of
an entry in cross_server_locks table: how a lock_name should be formed,
whether to support different locking modes, etc.


[1] https://review.openstack.org/#/c/101982/

[2] https://review.openstack.org/#/c/107350/

[3]
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Doug Wiegley
 I'd have to look at the Neutron code, but I suspect that a simple
 strategy of issuing the UPDATE SQL statement with a WHERE condition that

I'm assuming the locking is for serializing code, whereas for what you
describe above, is there some reason we wouldn't just use a transaction?

Thanks,
doug



On 7/30/14, 9:41 AM, Jay Pipes jaypi...@gmail.com wrote:

There's also no need to use locks at all for this (distributed or
otherwise).

You can use a compare and update strategy with an exponential backoff
similar to the approach taken here:

https://review.openstack.org/#/c/109837/

I'd have to look at the Neutron code, but I suspect that a simple
strategy of issuing the UPDATE SQL statement with a WHERE condition that
is constructed to take into account the expected current record state
would do the trick...

Best,
-jay

On 07/30/2014 09:33 AM, Clint Byrum wrote:
 Please do not re-invent locking.. the way we reinvented locking in Heat.
 ;)

 There are well known distributed coordination services such as Zookeeper
 and etcd, and there is an abstraction for them already called tooz:

 https://git.openstack.org/cgit/stackforge/tooz/

 Excerpts from Elena Ezhova's message of 2014-07-30 09:09:27 -0700:
 Hello everyone!

 Some recent change requests ([1], [2]) show that there is a number of
 issues with locking db resources in Neutron.

 One of them is initialization of drivers which can be performed
 simultaneously by several neutron servers. In this case locking is
 essential for avoiding conflicts which is now mostly done via using
 SQLAlchemy's
 with_lockmode() method, which emits SELECT..FOR UPDATE resulting in
rows
 being locked within a transaction. As it has been already stated by
Mike
 Bayer [3], this statement is not supported by Galera and, what's more,
by
  Postgresql for which a lock doesn't work in case when a table is empty.

 That is why there is a need for an easy solution that would allow
 cross-server locking and would work for every backend. First thing that
 comes into mind is to create a table which would contain all locks
acquired
 by various pieces of code. Each time a code, that wishes to access a
table
 that needs locking, would have to perform the following steps:

 1. Check whether a lock is already acquired by using SELECT lock_name
FROM
 cross_server_locks table.

 2. If SELECT returned None, acquire a lock by inserting it into the
 cross_server_locks table.

 In other case wait and then try again until a timeout is reached.

 3. After a code has executed it should release the lock by deleting the
 corresponding entry from the cross_server_locks table.

 The locking process can be implemented by decorating a function that
 performs a transaction by a special function, or as a context manager.

 Thus, I wanted to ask the community whether this approach deserves
 consideration and, if yes, it would be necessary to decide on the
format of
 an entry in cross_server_locks table: how a lock_name should be formed,
 whether to support different locking modes, etc.


 [1] https://review.openstack.org/#/c/101982/

 [2] https://review.openstack.org/#/c/107350/

 [3]
 
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Loc
king_-_SELECT_FOR_UPDATE

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes

On 07/30/2014 09:48 AM, Doug Wiegley wrote:

I'd have to look at the Neutron code, but I suspect that a simple
strategy of issuing the UPDATE SQL statement with a WHERE condition that


I'm assuming the locking is for serializing code, whereas for what you
describe above, is there some reason we wouldn't just use a transaction?


Because you can't do a transaction from two different threads...

The compare and update strategy is for avoiding the use of SELECT FOR 
UPDATE.


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann



On 7/30/2014 9:20 AM, Matt Riedemann wrote:



On 7/30/2014 6:43 AM, Daniel P. Berrange wrote:

On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:

This change:

https://review.openstack.org/#/c/105501/

Tries to pull in libvirt-python >= 1.2.5 for testing.

I'm on Ubuntu Precise for development which has libvirt 0.9.8.

The latest libvirt-python appears to require libvirt >= 0.9.11.

So do I have to move to Trusty?


You can use the CloudArchive repository to get newer libvirt and
qemu packages for Precise, which is what anyone deploying the
Ubuntu provided OpenStack packages would be doing.

Regards,
Daniel



Can we be more specific because this would also need to be updated in
the devref docs for setting up your development environment with Ubuntu.

Sorry for being a newb but I went here:

https://wiki.ubuntu.com/ServerTeam/CloudArchive/

And tried doing:

sudo add-apt-repository cloud-archive:icehouse

Which failed, I guess it doesn't know about what that means or something?

I added a repo to
http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/icehouse/
manually but it's not finding any newer libvirt packages.

If I can get some help I can push a patch to update the docs since I'm
assuming I won't be the only one that hits this and it sounds like
minesweeper hit it recently too. [1]

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html



Yay for docs team, I was missing this:

apt-get install python-software-properties

Found it here:

http://docs.openstack.org/havana/install-guide/install/apt/content/basics-packages.html

The devref env setup doc in nova should still probably be updated to say 
something like, 'hey if you're on juno using precise you need to enable 
cloud archive to update libvirt'.
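
For the doc update, the full sequence would be something like this (a
sketch only; exact package names may differ):

    # sketch -- exact package names may vary
    sudo apt-get install python-software-properties
    sudo add-apt-repository cloud-archive:icehouse
    sudo apt-get update
    sudo apt-get install libvirt-bin libvirt-dev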


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Clint Byrum
Excerpts from Doug Wiegley's message of 2014-07-30 09:48:17 -0700:
  I'd have to look at the Neutron code, but I suspect that a simple
  strategy of issuing the UPDATE SQL statement with a WHERE condition that
 
  I'm assuming the locking is for serializing code, whereas for what you
  describe above, is there some reason we wouldn't just use a transaction?
 

I believe the code in question is doing something like this:

1) Check DB for initialized SDN controller driver
2) Not initialized - initialize the SDN controller via its API
3) Record in DB that it is initialized

Step (2) above needs serialization, not (3).

Compare and update will end up working like a distributed lock anyway,
because the db model will have to be changed to have an initializing
state, and then if initializing fails, you'll have to have a timeout.. and
stealing for stuck processes.

Sometimes a distributed lock is actually a simpler solution.

Tooz will need work, no doubt. Perhaps if we call it 'oslo.locking' it
will make more sense. Anyway, my point stands: trust the experts, avoid
reinventing locking. And if you don't like tooz, extract the locking
code from Heat and turn it into an oslo.locking library or something.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Kevin Benton
i.e. 'optimistic locking' as opposed to the 'pessimistic locking'
referenced in the 3rd link of the email starting the thread.


On Wed, Jul 30, 2014 at 9:55 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 07/30/2014 09:48 AM, Doug Wiegley wrote:

 I'd have to look at the Neutron code, but I suspect that a simple
 strategy of issuing the UPDATE SQL statement with a WHERE condition that


 I'm assuming the locking is for serializing code, whereas for what you
 describe above, is there some reason we wouldn't just use a transaction?


 Because you can't do a transaction from two different threads...

 The compare and update strategy is for avoiding the use of SELECT FOR
 UPDATE.

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Morgan Fainberg

-Original Message-
From: Jay Pipes jaypi...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 30, 2014 at 09:59:15
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [neutron] Cross-server locking for neutron server

 On 07/30/2014 09:48 AM, Doug Wiegley wrote:
  I'd have to look at the Neutron code, but I suspect that a simple
  strategy of issuing the UPDATE SQL statement with a WHERE condition that
 
  I'm assuming the locking is for serializing code, whereas for what you
  describe above, is there some reason we wouldn't just use a transaction?
  
 Because you can't do a transaction from two different threads...
  
 The compare and update strategy is for avoiding the use of SELECT FOR
 UPDATE.
  
 Best,
 -jay


As a quick example of the optimistic locking you describe (UPDATE with WHERE 
clause) you can take a look at the Keystone “consume trust” logic:

https://review.openstack.org/#/c/97059/14/keystone/trust/backends/sql.py

Line 93 does the initial query; then on lines 108 and 115 we perform the 
update and check how many rows were affected.
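
The pattern boils down to something like this sketch (the table and column
names are invented here, not the actual Keystone schema):

    from sqlalchemy import text

    # Invented table/columns; illustrates read-then-conditional-update.
    def consume_use(session, trust_id):
        remaining = session.execute(
            text("SELECT remaining_uses FROM trust WHERE id = :id"),
            {"id": trust_id}).scalar()
        if not remaining:
            return False
        result = session.execute(text(
            "UPDATE trust SET remaining_uses = :new"
            " WHERE id = :id AND remaining_uses = :old"),
            {"id": trust_id, "old": remaining, "new": remaining - 1})
        session.commit()
        # rowcount == 0 means another request consumed a use in between,
        # so the caller retries or fails gracefully.
        return result.rowcount == 1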

Feel free to hit me up if I can help in any way on this.

Cheers,
Morgan 


—
Morgan Fainberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann



On 7/30/2014 6:43 AM, Daniel P. Berrange wrote:

On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:

This change:

https://review.openstack.org/#/c/105501/

Tries to pull in libvirt-python >= 1.2.5 for testing.

I'm on Ubuntu Precise for development which has libvirt 0.9.8.

The latest libvirt-python appears to require libvirt >= 0.9.11.

So do I have to move to Trusty?


You can use the CloudArchive repository to get newer libvirt and
qemu packages for Precise, which is what anyone deploying the
Ubuntu provided OpenStack packages would be doing.

Regards,
Daniel



Can we be more specific because this would also need to be updated in 
the devref docs for setting up your development environment with Ubuntu.


Sorry for being a newb but I went here:

https://wiki.ubuntu.com/ServerTeam/CloudArchive/

And tried doing:

sudo add-apt-repository cloud-archive:icehouse

Which failed, I guess it doesn't know about what that means or something?

I added a repo to 
http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/icehouse/ 
manually but it's not finding any newer libvirt packages.


If I can get some help I can push a patch to update the docs since I'm 
assuming I won't be the only one that hits this and it sounds like 
minesweeper hit it recently too. [1]


[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing the tox==1.6.1 pin

2014-07-30 Thread Matt Riedemann



On 7/30/2014 2:27 AM, Michele Paolino wrote:

On 30/07/2014 07:53, Matt Riedemann wrote:



On 7/25/2014 2:38 PM, Clark Boylan wrote:

Hello,

The recent release of tox 1.7.2 has fixed the {posargs} interpolation
issues we had with newer tox which forced us to be pinned to tox==1.6.1.
Before we can remove the pin and start telling people to use latest tox
we need to address a new default behavior in tox.

New tox sets a random PYTHONHASHSEED value by default. Arguably this is
a good thing as it forces you to write code that handles unknown hash
seeds, but unfortunately many projects' unittests don't currently deal
with this very well. A work around is to hard set a PYTHONHASHSEED of 0
in tox.ini files. I have begun to propose these changes to the projects
that I have tested and found to not handle random seeds. It would be
great if we could get these reviewed and merged so that infra can update
the version of tox used on our side.
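
(The tox.ini side of the work around is just a line in the [testenv]
section, e.g.:

    [testenv]
    # work around random hash seeds until tests handle them properly
    setenv = PYTHONHASHSEED=0

assuming the project defines its test settings there.)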

I probably won't be able to test every single project and propose fixes
with backports to stable branches for everything. It would be a massive
help if individual projects tested and proposed fixes as necessary too
(these changes will need to be backported to stable branches). You can
test by running `tox -epy27` in your project with tox version 1.7.2. If
that fails add PYTHONHASHSEED=0 as in
https://review.openstack.org/#/c/109700/ and rerun `tox -epy27` to
confirm that succeeds.

This will get us over the immediate hump of the tox upgrade, but we
should also start work to make our tests run with random hashes. This
shouldn't be too hard to do as it will be a self gating change once
infra is able to update the version of tox used in the gate. Most of the
issues appear related to dict entry ordering. I have gone ahead and
created https://bugs.launchpad.net/cinder/+bug/1348818 to track this
work.

Thank you,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is this in any way related to the fact that tox is unable to
find/install the oslo alpha packages for me in nova right now (config,
messaging, rootwrap) after I rebased on master?  I had to go into
requirements.txt and remove the min versions on the alpha versions to
get tox to install dependencies for nova unit tests. I'm running with
tox 1.6.1 but not sure if that would be related anyhow.


Problem confirmed from my side. The error is:
Downloading/unpacking oslo.config=1.4.0.0a3 (from -r
/media/repos/nova/requirements.txt (line 34))
   Could not find a version that satisfies the requirement
oslo.config=1.4.0.0a3 (from -r /media/repos/nova/requirements.txt (line
34)) (from versions: 1.1.0, 1.1.1, 1.2.0, 1.2.1, 1.3.0)



It looks like you have to use the --pre option with pip install to get 
pre-release packages, but then why isn't that in every project's tox.ini 
that is using these alpha oslo packages?


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] how to deprecate a plugin

2014-07-30 Thread YAMAMOTO Takashi
hi,

what's the right procedure to deprecate a plugin?  we (ryu team) are
considering deprecating ryu plugin, in favor of ofagent.  probably in
K-timeframe, if it's acceptable.

YAMAMOTO Takashi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [Solum] Stack update and raw_template backup

2014-07-30 Thread Clint Byrum
Excerpts from Anant Patil's message of 2014-07-29 23:21:05 -0700:
 On 28-Jul-14 22:37, Clint Byrum wrote:
  Excerpts from Zane Bitter's message of 2014-07-28 07:25:24 -0700:
  On 26/07/14 00:04, Anant Patil wrote:
  When the stack is updated, a diff of updated template and current
  template can be stored to optimize database.  And perhaps Heat should
  have an API to retrieve this history of templates for inspection etc.
  when the stack admin needs it.
 
  If there's a demand for that feature we could implement it, but it 
  doesn't easily fall out of the current implementation any more.
  
  We are never going to do it even 1/10th as well as git. In fact we won't
  even do it 1/0th as well as CVS.
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 Zane,
 I am working the defect you had filed, which would clean up backup stack
 along with the resources, templates and other data.
 
 However, I simply don't want to delete the templates for the same reason
 as we don't hard-delete the stack. Anyone who deploys a stack and
 updates it over time would want to view the the updates in the templates
 for debugging or auditing reasons. It is not fair to assume that every
 user has a VCS with him to store the templates. It is kind of
 inconvenience for me to not have the ability to view my updates in
 templates.
 

Sounds like a nice to have feature. I'd suggest you propose it as a
blueprint and spec. I will personally be against us spending time and
adding complexity for such a feature when it is so much better served
by VCS.

And I would also suggest that we _can_ assume that users have VCS. When
is the last time you encountered a developer or ops professional that
did not use at least some kind of VCS? For me, it was 2003, and it took
approximately 20 minutes to implement.

And if we want that as a service, I believe Solum is working on
doing that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann



On 7/30/2014 9:57 AM, Matt Riedemann wrote:



On 7/30/2014 9:20 AM, Matt Riedemann wrote:



On 7/30/2014 6:43 AM, Daniel P. Berrange wrote:

On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:

This change:

https://review.openstack.org/#/c/105501/

Tries to pull in libvirt-python >= 1.2.5 for testing.

I'm on Ubuntu Precise for development which has libvirt 0.9.8.

The latest libvirt-python appears to require libvirt >= 0.9.11.

So do I have to move to Trusty?


You can use the CloudArchive repository to get newer libvirt and
qemu packages for Precise, which is what anyone deploying the
Ubuntu provided OpenStack packages would be doing.

Regards,
Daniel



Can we be more specific because this would also need to be updated in
the devref docs for setting up your development environment with Ubuntu.

Sorry for being a newb but I went here:

https://wiki.ubuntu.com/ServerTeam/CloudArchive/

And tried doing:

sudo add-apt-repository cloud-archive:icehouse

Which failed, I guess it doesn't know about what that means or something?

I added a repo to
http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/icehouse/

manually but it's not finding any newer libvirt packages.

If I can get some help I can push a patch to update the docs since I'm
assuming I won't be the only one that hits this and it sounds like
minesweeper hit it recently too. [1]

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html



Yay for docs team, I was missing this:

apt-get install python-software-properties

Found it here:

http://docs.openstack.org/havana/install-guide/install/apt/content/basics-packages.html


The devref env setup doc in nova should still probably be updated to say
something like, 'hey if you're on juno using precise you need to enable
cloud archive to update libvirt'.



Hopefully this helps people:

https://review.openstack.org/#/c/110720/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Kevin L. Mitchell
On Wed, 2014-07-30 at 09:01 +0200, Flavio Percoco wrote:
 As a stable-maint, I'm always hesitant to review patches I've no
 understanding on, hence I end up just checking how big is the patch,
 whether it adds/removes new configuration options etc but, the real
 review has to be done by someone with good understanding of the change.
 
 Something I've done in the past is adding the folks that had approved
 the patch on master to the stable/maint review. They should know that
 code already, which means it shouldn't take them long to review it. All
 the sanity checks should've been done already.
 
 With all that said, I'd be happy to give *-core approval permissions on
 stable branches, but I still think we need a dedicated team that has a
 final (or at least relevant) word on the patches.

Maybe what we need to do is give *-core permission to +2 the patches,
but only stable/maint team has *approval* permission.  Then, the cores
can review the code, and stable/maint only has to verify applicability
to the stable branch…
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes

On 07/30/2014 10:05 AM, Kevin Benton wrote:

i.e. 'optimistic locking' as opposed to the 'pessimistic locking'
referenced in the 3rd link of the email starting the thread.


No, there's no locking.


On Wed, Jul 30, 2014 at 9:55 AM, Jay Pipes jaypi...@gmail.com wrote:

On 07/30/2014 09:48 AM, Doug Wiegley wrote:

I'd have to look at the Neutron code, but I suspect that a
simple
strategy of issuing the UPDATE SQL statement with a WHERE
condition that


I'm assuming the locking is for serializing code, whereas for
what you
describe above, is there some reason we wouldn't just use a
transaction?


Because you can't do a transaction from two different threads...

The compare and update strategy is for avoiding the use of SELECT
FOR UPDATE.

Best,
-jay



_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] how to deprecate a plugin

2014-07-30 Thread Kyle Mestery
On Wed, Jul 30, 2014 at 12:17 PM, YAMAMOTO Takashi
yamam...@valinux.co.jp wrote:
 hi,

 what's the right procedure to deprecate a plugin?  we (ryu team) are
 considering deprecating ryu plugin, in favor of ofagent.  probably in
 K-timeframe, if it's acceptable.

The typical way is to announce the deprecation at least one cycle
before removing the deprecated plugin from the tree. So, you could
announce the ryu plugin is deprecated in Juno, and then remove it from
the tree in Kilo.

Thanks,
Kyle

 YAMAMOTO Takashi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [swift] Improving ceilometer.objectstore.swift_middleware

2014-07-30 Thread Samuel Merritt

On 7/30/14, 8:06 AM, Chris Dent wrote:


ceilometer/objectstore/swift_middleware.py[1] counts the size of web
request and reponse bodies through the swift proxy server and publishes
metrics of the size of the request and response and that a request
happened at all.

There are (at least) two bug reports associated with this bit of code:

* avoid requirement on tarball for unit tests
   https://bugs.launchpad.net/ceilometer/+bug/1285388

* significant performance degradation when ceilometer middleware for
   swift proxy uses
   https://bugs.launchpad.net/ceilometer/+bug/1337761

[snip]



Some options appear to be:

* Move the middleware to swift or move the functionality to swift.

   In the process make the functionality drop generic notifications for
   storage.objects.incoming.bytes and storage.objects.outgoing.bytes
   that anyone can consume, including ceilometer.

   This could potentially address both bugs.

* Move or copy swift.common.utils.{InputProxy,split_path} to somewhere
   in oslo, but keep the middleware in ceilometer.

   This would require somebody sharing the info on how to properly
   participate in swift's logging setup without incorporating swift.

   This would fix the first bug without saying anything about the
   second.

* Carry on importing the swift tarball or otherwise depending on
   swift.

   Fixes neither bug, maintains status quo.

What are other options? Of those above which are best or most
realistic?


Swift is already emitting those numbers[1] in statsd format; could 
ceilometer consume those metrics and convert them to whatever 
notification format it uses?


When configured to log to statsd, the Swift proxy will emit metrics of 
the form proxy-server.type.verb.status.xfer; for example, a 
successful object download would have a metric name of 
proxy-server.object.GET.200.xfer and a value of the number of bytes 
downloaded. Similarly, PUTs would look like 
proxy-server.object.PUT.2xx.xfer.


If ceilometer were to consume these metrics in a process outside the 
Swift proxy server, this would solve both problems. The performance fix 
comes by being outside the Swift proxy, and consuming statsd metrics can 
be done without pulling in Swift code[2].
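
A rough sketch of such an out-of-proxy consumer (the statsd wire format is
"name:value|type" over UDP; publish_sample() is a hypothetical ceilometer
hook, not an existing API):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', 8125))   # point the proxy's log_statsd_host here

    while True:
        data, _addr = sock.recvfrom(4096)
        for line in data.decode('utf8').splitlines():
            name, _, rest = line.partition(':')
            value, _, _type = rest.partition('|')
            # e.g. name == 'proxy-server.object.GET.200.xfer'
            if name.endswith('.xfer'):
                # publish_sample() is a hypothetical hook, not ceilometer API
                publish_sample(name, float(value))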


[1] 
http://docs.openstack.org/developer/swift/admin_guide.html#reporting-metrics-to-statsd


[2] e.g. pystatsd

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-07-30 Thread Kyle Mestery
I wanted to send an email to let everyone know where we're at in the
Juno cycle. We're hitting our stride in Juno-3 development now, and we
have a lot of BPs targeted [1]. Due to this, I'm not going to approve
any more spec exceptions other than possibly flavors [2] and even less
possibly rootwrap [3] if the security implications can be worked out.
The reality is, we're severely oversubscribed as it is, and we won't
land even half of the approved BPs in Juno-3.

Also, for people with BPs approved for Juno-3, remember Neutron is
observing the Feature Proposal Freeze [4], which is August 21. Any BP
without code proposed by then will be deferred to Kilo.

As always, the dates for the Juno release can be found on the wiki here [5].

Thanks!
Kyle

[1] https://launchpad.net/neutron/+milestone/juno-3
[2] https://review.openstack.org/#/c/102723/
[3] https://review.openstack.org/#/c/93889/
[4] https://wiki.openstack.org/wiki/FeatureProposalFreeze
[5] https://wiki.openstack.org/wiki/NeutronJunoProjectPlan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Stack update and raw_template backup

2014-07-30 Thread Zane Bitter

On 30/07/14 02:21, Anant Patil wrote:

On 28-Jul-14 22:37, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2014-07-28 07:25:24 -0700:

On 26/07/14 00:04, Anant Patil wrote:

When the stack is updated, a diff of updated template and current
template can be stored to optimize database.  And perhaps Heat should
have an API to retrieve this history of templates for inspection etc.
when the stack admin needs it.


If there's a demand for that feature we could implement it, but it
doesn't easily fall out of the current implementation any more.


We are never going to do it even 1/10th as well as git. In fact we won't
even do it 1/0th as well as CVS.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Zane,
I am working the defect you had filed, which would clean up backup stack
along with the resources, templates and other data.

However, I simply don't want to delete the templates for the same reason
as we don't hard-delete the stack. Anyone who deploys a stack and
updates it over time would want to view the the updates in the templates
for debugging or auditing reasons.


As I mentioned, and as Ton mentioned in another thread, the old 
templates are useless for auditing (and to a large extent debugging) 
because the update process turns them into Frankentemplates that don't 
reflect either the old or the new template supplied by the user (though 
they're much closer to the new than the old).


So if you don't delete them, then you don't end up with a copy of the 
old template, you end up with a broken copy of the new template.



It is not fair to assume that every
user has a VCS with him to store the templates.


It most certainly is fair to assume that.

In addition, Glance has an artifact repository project already underway 
with the goal of providing versioned access to templates as part of 
OpenStack.


It's likely that not all users are making use of a VCS, but if they're 
not then I don't know why they bother using Heat. The whole point of the 
project is to provide a way for people to describe their infrastructure 
in a way that _can_ be managed in a VCS. Whenever we add new features, 
we always try to do so in a way that _encourages_ users to store their 
templates in a VCS and _discourages_ them from managing them in an ad 
hoc manner.



It is kind of
inconvenience for me to not have the ability to view my updates in
templates.


I tend to agree that it would be kind-of nice to have, but you're 
talking about it as if it's a simple matter of just not deleting the old 
template and sticking an API in front of it, rather than the major new 
development that it actually is.



We need not go as far as git or any VCS. Any library which can do a diff
and patch of text files can be used, like the google-diff-match-patch.


We don't store the original text of templates - in heat-engine we only 
get the object tree obtained by parsing the JSON or YAML. So the 
templates we store or could store currently are of not much use to 
(human) users.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread ZZelle
Hello,


I have stopped improving vxlan pool population and removing SELECT FOR
UPDATE [1] because I am not sure the current approach is the right one for
handling vxlan/gre tenant pools:

1- Do we really need to populate vxlan/gre tenant pools?
  The neutron-server could instead choose a vxlan VNI randomly from
vni_ranges, try to allocate it, and retry until the allocation succeeds
(see the sketch after this list).
  I did not verify, but MAC address allocation probably uses the same
principle?
  It is efficient as long as used_vnis is small enough (say, below 50%)
compared to the size of vni_ranges.
  I am about to propose an update of neutron.plugins.ml2.drivers.helpers [2]
in this direction.

2- Do we need to populate/update vxlan/gre tenant pools on neutron-server
restart?
  A specific command could populate/update them (neutron-db-manage populate
/ neutron-db-populate)
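
The random-pick-and-retry idea from point 1 would look roughly like this
(a sketch with invented names, not the actual ml2 helpers code; it assumes
a unique constraint on the VNI column so duplicates are rejected):

    import random

    from sqlalchemy.exc import IntegrityError

    MAX_RETRIES = 10

    # Invented names; assumes a unique constraint on the VNI column.
    def allocate_vni(session, vni_ranges, allocation_model):
        for _ in range(MAX_RETRIES):
            low, high = random.choice(vni_ranges)
            vni = random.randint(low, high)
            try:
                session.add(allocation_model(vxlan_vni=vni))
                session.commit()       # unique constraint rejects duplicates
                return vni
            except IntegrityError:
                session.rollback()     # collision, pick another VNI
        raise RuntimeError("could not allocate a free VNI")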


Any thoughts?

[1] https://review.openstack.org/#/c/101982
[2]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py


Cedric
ZZelle@IRC


On Wed, Jul 30, 2014 at 7:30 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 07/30/2014 10:05 AM, Kevin Benton wrote:

 i.e. 'optimistic locking' as opposed to the 'pessimistic locking'
 referenced in the 3rd link of the email starting the thread.


 No, there's no locking.

  On Wed, Jul 30, 2014 at 9:55 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 07/30/2014 09:48 AM, Doug Wiegley wrote:

 I'd have to look at the Neutron code, but I suspect that a
 simple
 strategy of issuing the UPDATE SQL statement with a WHERE
 condition that


 I'm assuming the locking is for serializing code, whereas for
 what you
 describe above, is there some reason we wouldn't just use a
 transaction?


 Because you can't do a transaction from two different threads...

 The compare and update strategy is for avoiding the use of SELECT
 FOR UPDATE.

 Best,
 -jay



 _
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --
 Kevin Benton


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Kevin Benton
Using the UPDATE WHERE statement you described is referred to as optimistic
locking. [1]

https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html


On Wed, Jul 30, 2014 at 10:30 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 07/30/2014 10:05 AM, Kevin Benton wrote:

 i.e. 'optimistic locking' as opposed to the 'pessimistic locking'
 referenced in the 3rd link of the email starting the thread.


 No, there's no locking.

  On Wed, Jul 30, 2014 at 9:55 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 07/30/2014 09:48 AM, Doug Wiegley wrote:

 I'd have to look at the Neutron code, but I suspect that a
 simple
 strategy of issuing the UPDATE SQL statement with a WHERE
 condition that


 I'm assuming the locking is for serializing code, whereas for
 what you
 describe above, is there some reason we wouldn't just use a
 transaction?


 Because you can't do a transaction from two different threads...

 The compare and update strategy is for avoiding the use of SELECT
 FOR UPDATE.

 Best,
 -jay



 _
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --
 Kevin Benton


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes

On 07/30/2014 10:53 AM, Kevin Benton wrote:

Using the UPDATE WHERE statement you described is referred to as
optimistic locking. [1]

https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html


SQL != JBoss.

It's not optimistic locking in the database world. In the database 
world, optimistic locking is an entirely separate animal:


http://en.wikipedia.org/wiki/Lock_(database)

And what I am describing is not optimistic lock concurrency in databases.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Bridging the 2-group gap in group policy

2014-07-30 Thread Hemanth Ravi
Hi,

Adding this CLI command seems to be a good way to provide support for the
second model. This can be submitted as a new review patch to work through
the approaches to implement this. I suggest the current CLI patch [1] be
reviewed for the existing spec and completed.

Ryan, would it be possible for you to start a new review submission for the
new command(s)? Could you also provide any references for profiled APIs in
the IETF, CCITT?

Thanks,
-hemanth

[1] https://review.openstack.org/#/c/104013


On Tue, Jul 29, 2014 at 3:16 PM, Ryan Moats rmo...@us.ibm.com wrote:

 As promised in Monday's Neutron IRC minutes [1], this mail is a trip down
 memory lane looking at the history of the
 Neutron GP project..  The original GP google doc [2] included specifying
 policy via both a produce/consume 1-group
 approach and as a link between two groups.  There was an email thread [3]
 that discussed the relationship between
 these models early on, but that discussion petered out and during a later
 IRC meeting [4] the concept of contracts
 were added, but without changing the basic use case requirements from the
 original document.  A followup meeting [5]
 began the discussion of how to express the original model from the
 contract data model but that discussion doesn't
 appear to have been completed either.  The PoC in Atlanta raised a set of
 issues [6],[7] around the complexity of the
 resulting PoC code.

 The good news is that having looked through the proposed GP code commits
 (links to which can be found at [8) I
 believe that folks that want to be able to specify policies via the
 2-group approach (and yes, I'm one of them) can have
 that without changing the model encoded in those commits. Rather, it can
 be done via the WiP CLI code commit by
 providing a profiled API - this is a technique used by the IETF, CCITT,
 etc. to allow a rich API to be consumed in
 common ways.  In this case, what I'm envisioning is something like

 neutron policy-apply [policy rule] [src group] [destination group]

 in this case, the CLI would perform the contract creation for the policy
 rule, and assigning the proper produce/consume
 edits to the specified source and destination groups.  Note:  this is in
 addition to the CLI providing direct access to the
 underlying data model.  I believe that this is the simplest way to bridge
 the gap and provide support to folks who want
 to specify policy as something between two groups.

 Ryan Moats (regXboi)

 References:
 [1]
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-07-28-21.02.log.txt
 [2]
 https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit#
 [3]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022150.html
 [4]
 http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-02-27-19.00.log.html
 [5]
 http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-03-20-19.00.log.html
 [6]
 http://lists.openstack.org/pipermail/openstack-dev/2014-May/035661.html
 [7]
 http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-05-22-18.01.log.html
 [8] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][nova] extra Ceilometer Samples of the instance gauge

2014-07-30 Thread Mike Spreitzer
In a normal DevStack install, each Compute instance causes one Ceilometer 
Sample every 10 minutes.  Except, there is an extra one every hour.  And a 
lot of extra ones at the start.  What's going on here?

For example:

$ ceilometer sample-list -m instance -q 
resource=9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab
+--+--+---++--++
| Resource ID  | Name | Type  | Volume | Unit  
  | Timestamp  |
+--+--+---++--++
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T18:09:28|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T18:00:54.009877 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T17:59:28|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T17:49:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T17:39:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T17:29:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T17:19:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T17:09:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T17:00:07.002075 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T16:59:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T16:49:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T16:39:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T16:29:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T16:19:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T16:09:26|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T16:00:20.172520 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T15:59:27|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T15:49:26|
...
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T05:19:21|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T05:09:21|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T05:00:41.909634 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:59:21|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:49:21|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:39:21|
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:55.049799 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:54.834377 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:51.905095 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:40.962977 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:40.201907 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:40.091348 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:39.858939 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:39.693631 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:39.523561 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:39.421295 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:39.411968 |
| 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0| 
instance | 2014-07-30T04:32:38.916604 |
+--+--+---++--++

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Meeting summary 2014-07-30

2014-07-30 Thread Steve Gordon
Meeting summary

https://etherpad.openstack.org/p/nfv-meeting-agenda (russellb, 14:00:32)

review actions from last week (russellb, 14:00:48)
ACTION: bauzas to update list with link to new dash (sgordon, 14:02:34)
https://review.openstack.org/#/c/95805/2 (sgordon, 14:02:54)
https://review.openstack.org/#/c/107797/1 (sgordon, 14:03:00)

http://lists.openstack.org/pipermail/openstack-dev/2014-July/040660.html 
(sgordon, 14:03:25)

http://lists.openstack.org/pipermail/openstack-dev/2014-July/040877.html 
(sgordon, 14:03:30)

blueprints (sgordon, 14:09:00)
https://wiki.openstack.org/wiki/Meetings/NFV#Active_Blueprints 
(sgordon, 14:09:56)
https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net (sgordon, 
14:10:13)
multiple-if-1-net approved, code up for review (sgordon, 14:10:26)
https://review.openstack.org/#/c/98488/ (sgordon, 14:11:33)
https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks 
(sgordon, 14:12:59)
https://review.openstack.org/#/c/97714/ (sgordon, 14:13:17)
https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity 
(sgordon, 14:15:12)

Team goals and structure (sgordon, 14:16:33)
Broader communication with wider development (nova, neutron, etc.) 
communities required to illustrate what NFV is and is not, current state and 
progress (sgordon, 14:26:14)
Feedback from nova midcycle is that more needs to be done to front load 
the earlier release milestones late in the previous cycle (sgordon, 14:26:50)
Cross-project design session proposal for Kilo summit needs to be 
co-ordinated and have specific goals to be accepted (sgordon, 14:30:32)
ACTION: Bring possible topics/goals for cross-project session to next 
week's meeting. (sgordon, 14:33:44)

meeting times (sgordon, 14:39:10)
http://whenisgood.net/exzzbi8 (sgordon, 14:40:00)
please vote on future meeting times by EoW (sgordon, 14:40:10)

open discussion (sgordon, 14:44:27)
ACTION: sgordon confirm future meeting times at next week's meeting 
(sgordon, 14:46:51)
ACTION: adrian-hoban to follow up on M/L regarding potential for 
pre-loading the early kilo milestones (sgordon, 14:55:24)
ACTION: smazziotta to report on outcome of ETSI NFV gaps discussion at 
next week's meeting (sgordon, 14:56:06)

Full logs: 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-30-14.00.log.html

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Ready to change the meeting time?

2014-07-30 Thread Steve Gordon
- Original Message -
 From: Isaku Yamahata isaku.yamah...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 Hi Steve.
 
 The timeslot of 5:00AM UTC(Tuesday) 30min clashes with it.
 https://wiki.openstack.org/wiki/Meetings/ServiceVM
 Please disable this time slot.
 
 thanks,

Hi Isaku,

When I go to the link ( http://whenisgood.net/exzzbi8 ) and select GMT this 
time is already filtered out/disabled?

Thanks,

Steve

 On Tue, Jul 29, 2014 at 10:53:21AM -0400,
 Steve Gordon sgor...@redhat.com wrote:
 
  Hi all,
  
  I have recently had a few people express concern to me that the current
  meeting time is preventing their attendance at the meeting. As we're still
  using the original meeting time we discussed using for a trial period
  immediately after summit it is probably time we reassess anyway.
  
  I have been through the global iCal [1] and tried to identify times where
  at least one of the IRC meeting rooms is available and no other NFV
  related team or subteam (E.g. Nova, Neutron, DVR, L3, etc.) is meeting.
  The resultant times are available for voting on this whenisgood.net sheet
  - be sure to select your location to view in your local time:
  
  http://whenisgood.net/exzzbi8
  
  If you are a regular participant in the NFV meetings, or even more
  importantly if you would like to be but are restrained from doing so
  because of the current timing then please record your preferences above.
  If you think there is an available time slot that I've missed, or I've
  made a time slot available that actually clashes with a meeting relevant
  to NFV participants, then please respond on list!
  
  This week's meeting will proceed at the regular time on Wednesday, July 30
  at 1400 UTC in #openstack-meeting-alt.
  
  Thanks,
  
  Steve
  
  [1]
  https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 --
 Isaku Yamahata isaku.yamah...@gmail.com
 

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Joe Gordon
On Wed, Jul 30, 2014 at 6:43 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:
  This change:
 
  https://review.openstack.org/#/c/105501/
 
  Tries to pull in libvirt-python >= 1.2.5 for testing.
 
  I'm on Ubuntu Precise for development which has libvirt 0.9.8.
 
  The latest libvirt-python appears to require libvirt >= 0.9.11.
 
  So do I have to move to Trusty?

 You can use the CloudArchive repository to get newer libvirt and
 qemu packages for Precise, which is what anyone deploying the
 Ubuntu provided OpenStack packages would be doing.


I am not a fan of this approach. The patch above, along with [0], broke
Minesweeper [1], and Matt, I am worried that we will be breaking other folks
as well. I don't think we should force folks to upgrade to a newer version
of libvirt just to do some code cleanup. I think we should revert these
patches.

Increase the min required libvirt version to 0.9.11 since


we require that for libvirt-python from PyPI to build
successfully. Kill off the legacy CPU model configuration
and legacy OpenVSwitch setup code paths only required by
libvirt < 0.9.11


[0] https://review.openstack.org/#/c/58494/
[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html



 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
 |: http://libvirt.org  -o- http://virt-manager.org
 :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
 :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
 :|

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Bridging the 2-group gap in group policy

2014-07-30 Thread Cathy Zhang
Hi all,

I support this API proposal. It is simple and conveys clear semantics. Thanks 
Ryan!

Cathy

From: Hemanth Ravi [mailto:hemanthrav...@gmail.com]
Sent: Wednesday, July 30, 2014 11:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Subrahmanyam Ongole
Subject: Re: [openstack-dev] [neutron][policy] Bridging the 2-group gap in 
group policy

Hi,

Adding this CLI command seems to be a good way to provide support for the 
second model. This can be submitted as a new review patch to work through the 
approaches to implement this. I suggest the current CLI patch [1] be reviewed 
for the existing spec and completed.

Ryan, would it possible for you to start a new review submit for the new 
command(s). Could you also provide any references for profiled API in IETF, 
CCITT.

Thanks,
-hemanth

[1] https://review.openstack.org/#/c/104013

On Tue, Jul 29, 2014 at 3:16 PM, Ryan Moats 
rmo...@us.ibm.commailto:rmo...@us.ibm.com wrote:

As promised in Monday's Neutron IRC minutes [1], this mail is a trip down 
memory lane looking at the history of the
Neutron GP project..  The original GP google doc [2] included specifying policy 
via both a produce/consume 1-group
approach and as a link between two groups.  There was an email thread [3] that 
discussed the relationship between
these models early on, but that discussion petered out and during a later IRC 
meeting [4] the concept of contracts
were added, but without changing the basic use case requirements from the 
original document.  A followup meeting [5]
began the discussion of how to express the original model from the contract 
data model but that discussion doesn't
appear to have been completed either.  The PoC in Atlanta raised a set of 
issues [6],[7] around the complexity of the
resulting PoC code.

The good news is that having looked through the proposed GP code commits (links 
to which can be found at [8) I
believe that folks that want to be able to specify policies via the 2-group 
approach (and yes, I'm one of them) can have
that without changing the model encoded in those commits. Rather, it can be 
done via the WiP CLI code commit by
providing a profiled API - this is a technique used by the IETF, CCITT, etc. 
to allow a rich API to be consumed in
common ways.  In this case, what I'm envisioning is something like

neutron policy-apply [policy rule] [src group] [destination group]

in this case, the CLI would perform the contract creation for the policy rule, 
and assigning the proper produce/consume
edits to the specified source and destination groups.  Note:  this is in 
addition to the CLI providing direct access to the
underlying data model.  I believe that this is the simplest way to bridge the 
gap and provide support to folks who want
to specify policy as something between two groups.
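
To make that expansion concrete, here is a rough sketch of what such a profiled
command could do under the covers; the client method names below are purely
illustrative stand-ins for the existing contract-creation and group-update
calls, not real API:

  # Hypothetical sketch only: expand a 2-group "policy-apply" call into the
  # underlying 1-group (produce/consume) data model operations.
  def policy_apply(client, policy_rule, src_group, dst_group):
      # 1. Wrap the single rule in a contract.
      contract = client.create_contract(rules=[policy_rule])
      # 2. The destination group provides (produces) the contract...
      client.update_group(dst_group, provided_contracts=[contract['id']])
      # 3. ...and the source group consumes it.
      client.update_group(src_group, consumed_contracts=[contract['id']])
      return contract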

Ryan Moats (regXboi)

References:
[1] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-07-28-21.02.log.txt
[2] 
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit#https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit
[3] http://lists.openstack.org/pipermail/openstack-dev/2013-December/022150.html
[4] 
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-02-27-19.00.log.html
[5] 
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-03-20-19.00.log.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2014-May/035661.html
[7] 
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-05-22-18.01.log.html
[8] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-07-30 Thread Ken Giusti
On Wed, 30 Jul 2014 14:25:46 +0100, Daniel P. Berrange wrote:
 On Wed, Jul 30, 2014 at 08:54:01AM -0400, Ken Giusti wrote:
  Greetings,
 
  Apologies for the cross-post: this should be of interest to both infra
  and olso.messaging developers.
 
  The blueprint [0] that adds support for version 1.0 of the AMQP messaging
  protocol is blocked due to CI test failures [1]. These failures are due
  to a new package dependency this blueprint adds to oslo.messaging.
 
  The AMQP 1.0 functionality is provided by the Apache Qpid's Proton
  AMQP 1.0 toolkit.  The blueprint uses the Python bindings for this
  toolkit, which are available on Pypi.  These bindings, however, include
  a C extension that depends on the Proton toolkit development libraries
  in order to build and install.  The lack of this toolkit is the cause
  of the blueprint's current CI failures.
 
  This toolkit is written in C, and thus requires platform-specific
  libraries.
 
  Now here's the problem: packages for Proton are not included by
  default in most distro's base repositories (yet).  The Apache Qpid
  team has provided packages for EPEL, and has a PPA available for
  Ubuntu.  Packages for Debian are also being proposed.
 
  I'm proposing this patch to openstack-infra/config to address the
  dependency problem [2].  It adds the proton toolkit packages to the
  common slave configuration.  Does this make sense?  Are there any
  better alternatives?

 For other cases where we need more native packages, we tyically
 use devstack to ensure they are installed. This is preferrable
 since it works for ordinary developers as well as the CI system.


Thanks Daniel.  It was my understanding - which may be wrong - that
having devstack install the 'out of band' packages would only help in
the case of the devstack-based integration tests, not in the case of
CI running the unit tests.  Is that indeed the case?

At this point, there are no integration tests that exercise the
driver.  However, the new unit tests include a test 'broker', which
allow the unit tests to fully exercise the new driver, right down to
the network.  That's a bonus of AMQP 1.0 - it can support peer-2-peer
messaging.

So it's the new unit tests that have the 'hard' requirement of the
proton libraries. And mocking out the proton libraries really
doesn't allow us to do any meaningful tests of the driver.

But if devstack is the preferred method for getting 'special case'
packages installed, would it be acceptable to have the new unit tests
run as a separate oslo.messaging integration test, and remove them
from the unit tests?

I'm open to any thoughts on how best to solve this, thanks.

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bp/pxe boot

2014-07-30 Thread Steve Gordon
- Original Message -
 From: Angelo Matarazzo matarazzoang...@gmail.com
 To: openstack-dev@lists.openstack.org
 
 Hi folks,
 I would add the pxe boot capability to Nova/libvirt and Horizon too.
 Currently, compute instances must be booted from images (or snapshots)
 stored in Glance or volumes stored in Cinder.
 Our idea (as you can find below) is already described there [1] [2] and
 aims to provide a design for booting compute instances from a PXE boot
 server, i.e. bypassing the image/snapshot/volume requirement.
 There is already an open blueprint, but I would want to register a new one
 because it has had no updates since 2013.
 https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe
 https://wiki.openstack.org/wiki/Nova/Blueprints/pxe-boot-instance
 What do you think?
 
 Thanks beforehand
 
 Angelo

Hi Angelo,

As you may have noticed I have previously commented on this blueprint and 
created the Wiki page to begin trying to collect ideas on this subject. 
Ultimately though for the use case I was looking at it ended up proving to be 
easier to store an iPXE image in glance and using that as the initial boot 
media - at least in the short term.

For Libvirt/KVM (and I assume other supported hypervisors) changing the guest 
configuration to support network boot would be easy enough but the wider 
question I think is how the PXE boot server is to be presented. Does it need to 
be on the primary interface managed by dnsmasq or secondary interface(s) 
attached to external provider networks? What are your thoughts on how to 
approach it?
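
As a rough illustration of the "easy enough" part on the libvirt side (not a
proposal for how Nova would actually wire this up), something along these
lines works with libvirt-python; the domain XML and the 'pxe-net' network
name are made up for the example:

  import libvirt

  # Minimal transient KVM guest that asks the firmware to boot from the
  # network: the key piece is <boot dev='network'/> in the <os> section.
  DOMAIN_XML = """
  <domain type='kvm'>
    <name>pxe-test</name>
    <memory unit='MiB'>512</memory>
    <vcpu>1</vcpu>
    <os>
      <type arch='x86_64'>hvm</type>
      <boot dev='network'/>
    </os>
    <devices>
      <interface type='network'>
        <source network='pxe-net'/>
        <model type='virtio'/>
      </interface>
      <graphics type='vnc'/>
    </devices>
  </domain>
  """

  conn = libvirt.open('qemu:///system')
  dom = conn.createXML(DOMAIN_XML, 0)  # starts immediately and PXE boots
  print("Started %s, booting from the network" % dom.name())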

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][nova] extra Ceilometer Samples of the instance gauge

2014-07-30 Thread gordon chung
 In a normal DevStack install, each Compute instance causes one Ceilometer 
 Sample every 10 minutes.  Except, there is an extra one every hour.  And a 
 lot of extra ones at  the start.  What's going on here? 
instance is one meter which is generated through both polling and notifications 
(see *origin* column[1]). when you create/update/delete an instance in Nova, it 
will generate a set of notifications relating to the instance. Ceilometer 
listens to those notifications and generates samples from them.
[1] 
http://docs.openstack.org/developer/ceilometer/measurements.html#compute-nova

cheers,
gord
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann



On 7/30/2014 11:49 AM, Joe Gordon wrote:




On Wed, Jul 30, 2014 at 6:43 AM, Daniel P. Berrange berra...@redhat.com
mailto:berra...@redhat.com wrote:

On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:
  This change:
 
  https://review.openstack.org/#/c/105501/
 
  Tries to pull in libvirt-python >= 1.2.5 for testing.
 
  I'm on Ubuntu Precise for development which has libvirt 0.9.8.
 
  The latest libvirt-python appears to require libvirt >= 0.9.11.
 
  So do I have to move to Trusty?

You can use the CloudArchive repository to get newer libvirt and
qemu packages for Precise, which is what anyone deploying the
Ubuntu provided OpenStack packages would be doing.


I am not a fan of this approach the patch above along with [0], broke
Minesweeper [1] and Matt, I am worried that we will be breaking other
folks as well. I don't think we should force folks to upgrade to a newer
version of libvirt just to do some code cleanup. I think we should
revert these patches.

Increase the min required libvirt version to 0.9.11 since


we require that for libvirt-python from PyPI to build
successfully. Kill off the legacy CPU model configuration
and legacy OpenVSwitch setup code paths only required by
libvirt < 0.9.11


[0] https://review.openstack.org/#/c/58494/
[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html


Regards,
Daniel
--
|: http://berrange.com  -o-
http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o- http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So https://review.openstack.org/#/c/58494/ is new to me as of today.

The 0.9.8 on ubuntu precise broke me (and our internal CI system which 
is running against precise images, but that's internal so meh).  The 
gate is running against ubuntu trusty and I have a way forward on 
getting updated libvirt in ubuntu precise (with updated docs on how 
others can as well), which is a short-term fix until I move my dev 
environment to ubuntu trusty.


My bigger concern here was how this impacts RHEL 6.5 which I'm running 
Juno on, but looks like that has libvirt 0.10.2 so I'm good.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Meeting summary 2014-07-30

2014-07-30 Thread Gary Kotton
Hi,
There are a few other issues which may be related:
- https://review.openstack.org/103091
- https://review.openstack.org/103094
They are both related to the attachment of new interfaces to a VM
Thanks
Gary

On 7/30/14, 9:38 PM, Steve Gordon sgor...@redhat.com wrote:

Meeting summary

https://etherpad.openstack.org/p/nfv-meeting-agenda (russellb,
14:00:32)

review actions from last week (russellb, 14:00:48)
ACTION: bauzas to update list with link to new dash (sgordon,
14:02:34)
https://review.openstack.org/#/c/95805/2 (sgordon, 14:02:54)
https://review.openstack.org/#/c/107797/1 (sgordon, 14:03:00)

http://lists.openstack.org/pipermail/openstack-dev/2014-July/040660.html
(sgordon, 14:03:25)

http://lists.openstack.org/pipermail/openstack-dev/2014-July/040877.html
(sgordon, 14:03:30)

blueprints (sgordon, 14:09:00)
https://wiki.openstack.org/wiki/Meetings/NFV#Active_Blueprints
(sgordon, 14:09:56)

https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net (sgordon, 14:10:13)
multiple-if-1-net approved, code up for review (sgordon, 14:10:26)
https://review.openstack.org/#/c/98488/ (sgordon, 14:11:33)

https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks (sgordon, 14:12:59)
https://review.openstack.org/#/c/97714/ (sgordon, 14:13:17)

https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity (sgordon, 14:15:12)

Team goals and structure (sgordon, 14:16:33)
Broader communication with wider development (nova, neutron,
etc.) communities required to illustrate what NFV is and is not, current
state and progress (sgordon, 14:26:14)
Feedback from nova midcycle is that more needs to be done to
front load the earlier release milestones late in the previous cycle
(sgordon, 14:26:50)
Cross-project design session proposal for Kilo summit needs to be
co-ordinated and have specific goals to be accepted (sgordon, 14:30:32)
ACTION: Bring possible topics/goals for cross-project session to
next week's meeting. (sgordon, 14:33:44)

meeting times (sgordon, 14:39:10)

http://whenisgood.net/exzzbi8 (sgordon, 14:40:00)
please vote on future meeting times by EoW (sgordon, 14:40:10)

open discussion (sgordon, 14:44:27)
ACTION: sgordon confirm future meeting times at next week's
meeting (sgordon, 14:46:51)
ACTION: adrian-hoban to follow up on M/L regarding potential for
pre-loading the early kilo milestones (sgordon, 14:55:24)
ACTION: smazziotta to report on outcome of ETSI NFV gaps
discussion at next week's meeting (sgordon, 14:56:06)

Full logs: 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-30-14.00.log.html

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] how to deprecate a plugin

2014-07-30 Thread Gary Kotton
In Nova we have done the following:
- add a warning in the current version that this will be deprecated in K
- start of K drop the code
It is also important to have an upgrade path. What happens if I have my
RYU plugin in production and I upgrade to the next version. That should be
clearly documented.
I think that maybe we should consider working on a migration scheme -
from plugin A to plugin B.
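For the "add a warning" step, something as simple as the sketch below is
usually enough (the plugin class name and message wording here are
illustrative only, not the actual Ryu code):

  import logging

  LOG = logging.getLogger(__name__)

  class RyuPluginV2(object):  # illustrative name only
      def __init__(self):
          # Step 1 of the deprecation: warn loudly when the plugin loads.
          LOG.warning("The Ryu plugin is deprecated and will be removed "
                      "in the K release; please plan a migration to the "
                      "ofagent-based setup.")
          # ... normal plugin initialization continues here ...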
Thanks
Gary

On 7/30/14, 8:17 PM, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:

hi,

what's the right procedure to deprecate a plugin?  we (ryu team) are
considering deprecating ryu plugin, in favor of ofagent.  probably in
K-timeframe, if it's acceptable.

YAMAMOTO Takashi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Bridging the 2-group gap in group policy

2014-07-30 Thread Mandeep Dhami
Hi Ryan:

As I stated in the patch review, the suggestion to use a profiled API
like IETF/CCITT is indeed very interesting. As a profiled API has not
been tried with any neutron model before, and as there is no existing
design pattern/best practices for how best to structure that, my
recommendation is to create a new patch (dependent on this patch) to try
that experiment.

That patch will also clarify what you mean by a profiled API and
how that might interact with other OpenStack services like Heat and Horizon.

Regards,
Mandeep
-



On Wed, Jul 30, 2014 at 11:13 AM, Hemanth Ravi hemanthrav...@gmail.com
wrote:

 Hi,

 Adding this CLI command seems to be a good way to provide support for the
 second model. This can be submitted as a new review patch to work through
 the approaches to implement this. I suggest the current CLI patch [1] be
 reviewed for the existing spec and completed.

 Ryan, would it possible for you to start a new review submit for the new
 command(s). Could you also provide any references for profiled API in
 IETF, CCITT.

 Thanks,
 -hemanth

 [1] https://review.openstack.org/#/c/104013


 On Tue, Jul 29, 2014 at 3:16 PM, Ryan Moats rmo...@us.ibm.com wrote:

 As promised in Monday's Neutron IRC minutes [1], this mail is a trip
 down memory lane looking at the history of the
 Neutron GP project..  The original GP google doc [2] included specifying
 policy via both a produce/consume 1-group
 approach and as a link between two groups.  There was an email thread [3]
 that discussed the relationship between
 these models early on, but that discussion petered out and during a later
 IRC meeting [4] the concept of contracts
 were added, but without changing the basic use case requirements from the
 original document.  A followup meeting [5]
 began the discussion of how to express the original model from the
 contract data model but that discussion doesn't
 appear to have been completed either.  The PoC in Atlanta raised a set of
 issues [6],[7] around the complexity of the
 resulting PoC code.

 The good news is that having looked through the proposed GP code commits
 (links to which can be found at [8) I
 believe that folks that want to be able to specify policies via the
 2-group approach (and yes, I'm one of them) can have
 that without changing the model encoded in those commits. Rather, it can
 be done via the WiP CLI code commit by
 providing a profiled API - this is a technique used by the IETF, CCITT,
 etc. to allow a rich API to be consumed in
 common ways.  In this case, what I'm envisioning is something like

 neutron policy-apply [policy rule] [src group] [destination group]

 in this case, the CLI would perform the contract creation for the policy
 rule, and assigning the proper produce/consume
 edits to the specified source and destination groups.  Note:  this is in
 addition to the CLI providing direct access to the
 underlying data model.  I believe that this is the simplest way to
 bridge the gap and provide support to folks who want
 to specify policy as something between two groups.

 Ryan Moats (regXboi)

 References:
 [1]
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-07-28-21.02.log.txt
 [2]
 https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit#
 [3]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022150.html
 [4]
 http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-02-27-19.00.log.html
 [5]
 http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-03-20-19.00.log.html
 [6]
 http://lists.openstack.org/pipermail/openstack-dev/2014-May/035661.html
 [7]
 http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-05-22-18.01.log.html
 [8] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Kevin Benton
Maybe I misunderstood your approach then.

I though you were suggesting where a node performs an UPDATE record WHERE
record = last_state_node_saw query and then checks the number of affected
rows. That's optimistic locking by every definition I've heard of it. It
matches the following statement from the wiki article you linked to as well:

The latter situation (optimistic locking) is only appropriate when there
is less chance of someone needing to access the record while it is locked;
otherwise it cannot be certain that the update will succeed because the
attempt to update the record will fail if another user updates the record
first.

Did I misinterpret how your approach works?


On Wed, Jul 30, 2014 at 11:07 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 07/30/2014 10:53 AM, Kevin Benton wrote:

 Using the UPDATE WHERE statement you described is referred to as
 optimistic locking. [1]

 https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html


 SQL != JBoss.

 It's not optimistic locking in the database world. In the database world,
 optimistic locking is an entirely separate animal:

 http://en.wikipedia.org/wiki/Lock_(database)

 And what I am describing is not optimistic lock concurrency in databases.

 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] objects notifications

2014-07-30 Thread Gary Kotton


On 7/30/14, 7:26 AM, Dan Smith d...@danplanet.com wrote:

 When reviewing https://review.openstack.org/#/c/107954/ it occurred to
 me that maybe we should consider having some kind of generic object
 wrapper that could do notifications for objects. Any thoughts on this?

I think it might be good to do this in a repeatable, but perhaps not
totally automatic way. I can see that any time instance gets changed in
certain ways, that we'd want a notification about it. However, there are
probably some cases that don't fit that. For example,
instance.system_metadata is mostly private to nova I think, so I'm not
sure we'd want to emit a notification for that. Plus, we'd probably end
up with some serious duplication if we just do it implicitly.

What if we provided a way to declare the fields of an object that we
want to trigger a notification? Something like:

I think that this is a nice idea that we should look into


  NOTIFICATION_FIELDS = ['host', 'metadata', ...]

  @notify_on_save(NOTIFICATION_FIELDS)
  @base.remotable
  def save(context):
  ...

--Dan
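
A minimal, self-contained sketch of what such a decorator could look like
(this is not Nova code: the change-tracking call and the notifier below are
simplified stand-ins, just to show the shape of the idea):

  import functools

  def _emit_notification(event, payload):
      # Stand-in for whatever notifier would really be used (e.g. an RPC
      # notifier); it just prints here for illustration.
      print('NOTIFY %s: %s' % (event, payload))

  def notify_on_save(fields):
      """Wrap save(); emit a notification if any watched field changed."""
      def decorator(save):
          @functools.wraps(save)
          def wrapper(self, *args, **kwargs):
              # obj_what_changed() stands in for the object's real change
              # tracking; assumed to return the names of dirty fields.
              changed = set(self.obj_what_changed()) & set(fields)
              result = save(self, *args, **kwargs)
              if changed:
                  _emit_notification(
                      '%s.update' % type(self).__name__,
                      dict((f, getattr(self, f)) for f in changed))
              return result
          return wrapper
      return decorator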



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Gary Kotton


On 7/30/14, 8:22 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com
wrote:

On Wed, 2014-07-30 at 09:01 +0200, Flavio Percoco wrote:
 As a stable-maint, I'm always hesitant to review patches I've no
 understanding on, hence I end up just checking how big is the patch,
 whether it adds/removes new configuration options etc but, the real
 review has to be done by someone with good understanding of the change.
 
 Something I've done in the past is adding the folks that had approved
 the patch on master to the stable/maint review. They should know that
 code already, which means it shouldn't take them long to review it. All
 the sanity checks should've been done already.
 
 With all that said, I'd be happy to give *-core approval permissions on
 stable branches, but I still think we need a dedicated team that has a
 final (or at least relevant) word on the patches.

Maybe what we need to do is give *-core permission to +2 the patches,
but only stable/maint team has *approval* permission.  Then, the cores
can review the code, and stable/maint only has to verify applicability
to the stable branch…

+1

-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-07-30 Thread Mandeep Dhami
Also, can I recommend that to avoid last minute rush of all the code in
Juno-3 (and then clogging up the gate at that time), we work as a team to
re-review patches that have addressed all previously identified issues?

For example, for the GBP plugin, the first series of patches has been
updated to address all issues that were identified, and doing
re-review/merge now would reduce the load near the end of the cycle.

Regards,
Mandeep
-





On Wed, Jul 30, 2014 at 10:52 AM, Kyle Mestery mest...@mestery.com wrote:

 I wanted to send an email to let everyone know where we're at in the
 Juno cycle. We're hitting our stride in Juno-3 development now, and we
 have a lot of BPs targeted [1]. Due to this, I'm not going to approve
 any more spec exceptions other than possibly flavors [2] and even less
 possibly rootwrap [3] if the security implications can be worked out.
 The reality is, we're severely oversubscribed as it is, and we won't
 land even half of the approved BPs in Juno-3.

 Also, for people with BPs approved for Juno-3, remember Neutron is
 observing the Feature Proposal Freeze [4], which is August 21. Any BP
 without code proposed by then will be deferred to Kilo.

 As always, the dates for the Juno release can be found on the wiki here
 [5].

 Thanks!
 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-3
 [2] https://review.openstack.org/#/c/102723/
 [3] https://review.openstack.org/#/c/93889/
 [4] https://wiki.openstack.org/wiki/FeatureProposalFreeze
 [5] https://wiki.openstack.org/wiki/NeutronJunoProjectPlan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Russell Bryant
On 07/30/2014 01:22 PM, Kevin L. Mitchell wrote:
 On Wed, 2014-07-30 at 09:01 +0200, Flavio Percoco wrote:
 As a stable-maint, I'm always hesitant to review patches I've no
 understanding on, hence I end up just checking how big is the patch,
 whether it adds/removes new configuration options etc but, the real
 review has to be done by someone with good understanding of the change.

 Something I've done in the past is adding the folks that had approved
 the patch on master to the stable/maint review. They should know that
 code already, which means it shouldn't take them long to review it. All
 the sanity checks should've been done already.

 With all that said, I'd be happy to give *-core approval permissions on
 stable branches, but I still think we need a dedicated team that has a
 final (or at least relevant) word on the patches.
 
 Maybe what we need to do is give *-core permission to +2 the patches,
 but only stable/maint team has *approval* permission.  Then, the cores
 can review the code, and stable/maint only has to verify applicability
 to the stable branch…
 

We could also just do this by convention.  We already allow a single +2
on backports if the person who did the backport is also on stable-maint.

We could add to that allowing a single +2/+A if the person who did the
backport is on project-core, or if a person from project-core has given
a +1.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-07-30 Thread Boris Pavlovic
Hi all,

This thread is very useful. We've detected an issue related to the mission
statement and name of the proposed program at an early stage. It seems the
mission statement and name are totally unclear and don't present the goals
of this program in the right perspective.

I updated name and mission statement:

name:
SLA Management

mission:
Provide SLA Management for production OpenStack clouds. This includes
measuring and tracking performance of OpenStack Services, key API
methods
and cloud applications, performance and functional tests on demand, and
everything that is required to detect and debug issues in live
production clouds.

As well, I updated patch to governance:
https://review.openstack.org/#/c/108502/3


I hope it's now clearer what the goal of this program is and why we
should add a new program.

Thoughts?

Best regards,
Boris Pavlovic


On Tue, Jul 29, 2014 at 12:39 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi Sean,

 I appreciate you valuing Rally so highly as to suggesting it should join
 the QA program. It is a great vote of confidence for me. While I believe
 that Rally  and Tempest will always work closely together, the intended
 utility and the direction of where we are planing to take Rally will not be
 compatible with the direction of where I think the QA program is going.
 Please let me explain in more details below.

 Tempest is a collection of Functional and Performance Tests which is used
 by the developers to improve the quality of the OpenStack code.

 Rally on the other hand, is envisioned as a Tool that is going to be run
 by the cloud operators in order to measure, tune and continuously improve
 the performance of an OpenStack cloud.  Moreover, we have an SLA module
 that allows the Operator to define what constitutes an acceptable level of
 performance and a profiler that would provide both the user and the
 developer the diagnostic set of performance data.  Finally, Rally is
 designed to run on production clouds and to be integrated as a Horizon
 plugin.

 In the future, we envision integrating Rally with other services (e.g.
 Logging as a Service, Satori, Rubick, and other operator-targeted
 services). I believe that this is not the direction compatible with the
 mission of the the QA program .

 Before applying for a new Performance and Scalability program, we have
 thought that the best existing program that Rally could be a part of now
 and in the future is the Telemetry program. We have discussed with Eoghan
 Glynn the idea of extending the scope of its mission to include other
 operator related projects and include Rally to it. Eoghan liked the idea in
 general but felt that Ceilometer currently has too much on its plate and
 was not in a position to merge in a new project. However, I can still see
 the two programs maturing and potentially becoming one down the road.

 Now, regarding the point that you make of Rally and Tempest doing some
 duplicate work. I completely agree with you that we should avoid it as much
 as possible and we should stay in close communication to make sure that
 duplicate requirements are only implemented once.

 Following our earlier discussion, Rally is now using Tempest for those
 benchmarks that do not require special complex environments, we also
 encapsulated and automated Tempest usage to make it more accessible for the
 Operators (here is the Blog documenting it --
 http://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
 ).

 We would like to further continue to de-duplicate the work inside Tempest
 and Rally. We made some joint design decisions in Atlanta to transfer some
 of the Integration code from Rally to Tempest, resulting in the work
 performed by Andrew Kurilin (https://review.openstack.org/#/c/94473/). I
 would encourage and welcome more of such cooperation in the future.

 I trust that this addresses most of your concerns and please do not
 hesitate to bring up more questions and suggestions.

 Sincerely,

 Boris


 On Sun, Jul 27, 2014 at 6:57 PM, Sean Dague s...@dague.net wrote:

 On 07/26/2014 05:51 PM, Hayes, Graham wrote:
  On Tue, 2014-07-22 at 12:18 -0400, Sean Dague wrote:
  On 07/22/2014 11:58 AM, David Kranz wrote:
  On 07/22/2014 10:44 AM, Sean Dague wrote:
  Honestly, I'm really not sure I see this as a different program, but
 is
  really something that should be folded into the QA program. I feel
 like
  a top level effort like this is going to lead to a lot of
 duplication in
  the data analysis that's currently going on, as well as functionality
  for better load driver UX.
 
 -Sean
  +1
  It will also lead to pointless discussions/arguments about which
  activities are part of QA and which are part of
  Performance and Scalability Testing.
 
  I think that those discussions will still take place, it will just be on
  a per repository basis, instead of a per program one.
 
  [snip]
 
 
  Right, 100% agreed. Rally would remain with it's own repo + review
 team,
  just like 

Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Joe Gordon
On Wed, Jul 30, 2014 at 12:11 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
 wrote:



 On 7/30/2014 11:49 AM, Joe Gordon wrote:




 On Wed, Jul 30, 2014 at 6:43 AM, Daniel P. Berrange berra...@redhat.com
 mailto:berra...@redhat.com wrote:

 On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:
   This change:
  
   https://review.openstack.org/#/c/105501/
  
   Tries to pull in libvirt-python >= 1.2.5 for testing.
  
   I'm on Ubuntu Precise for development which has libvirt 0.9.8.
  
   The latest libvirt-python appears to require libvirt >= 0.9.11.
  
   So do I have to move to Trusty?

 You can use the CloudArchive repository to get newer libvirt and
 qemu packages for Precise, which is what anyone deploying the
 Ubuntu provided OpenStack packages would be doing.


 I am not a fan of this approach the patch above along with [0], broke
 Minesweeper [1] and Matt, I am worried that we will be breaking other
 folks as well. I don't think we should force folks to upgrade to a newer
 version of libvirt just to do some code cleanup. I think we should
 revert these patches.

 Increase the min required libvirt version to 0.9.11 since


 we require that for libvirt-python from PyPI to build
 successfully. Kill off the legacy CPU model configuration
 and legacy OpenVSwitch setup code paths only required by
 libvirt < 0.9.11


 [0] https://review.openstack.org/#/c/58494/
 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-
 July/041457.html


 Regards,
 Daniel
 --
 |: http://berrange.com  -o-
 http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o- http://live.gnome.org/gtk-vnc
 :|

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 So https://review.openstack.org/#/c/58494/ is new to me as of today.

 The 0.9.8 on ubuntu precise broke me (and our internal CI system which is
 running against precise images, but that's internal so meh).  The gate is
 running against ubuntu trusty and I have a way forward on getting updated
 libvirt in ubuntu precise (with updated docs on how others can as well),
 which is a short-term fix until I move my dev environment to ubuntu trusty.

 My bigger concern here was how this impacts RHEL 6.5 which I'm running
 Juno on, but looks like that has libvirt 0.10.2 so I'm good.



While forcing people to move to a newer version of libvirt is doable on
most environments, do we want to do that now? What is the benefit of doing
so? Is it ok to do without a deprecation cycle?

Proposed revert patches:

https://review.openstack.org/110773
https://review.openstack.org/110774



 --

 Thanks,

 Matt Riedemann



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-30 Thread Rochelle.RochelleGrober
 From: Gary Kotton [mailto:gkot...@vmware.com] 
 
 On 7/30/14, 8:22 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com
 wrote:
 
 On Wed, 2014-07-30 at 09:01 +0200, Flavio Percoco wrote:
  As a stable-maint, I'm always hesitant to review patches I've no
  understanding on, hence I end up just checking how big is the patch,
  whether it adds/removes new configuration options etc but, the real
  review has to be done by someone with good understanding of the
 change.
 
  Something I've done in the past is adding the folks that had
 approved
  the patch on master to the stable/maint review. They should know
 that
  code already, which means it shouldn't take them long to review it.
 All
  the sanity checks should've been done already.
 
  With all that said, I'd be happy to give *-core approval permissions
 on
  stable branches, but I still think we need a dedicated team that has
 a
  final (or at least relevant) word on the patches.
 
 Maybe what we need to do is give *-core permission to +2 the patches,
 but only stable/maint team has *approval* permission.  Then, the cores
 can review the code, and stable/maint only has to verify applicability
 to the stable branch…
 
 +1


+1

This approach guarantees final say by the stable/maint team, but lets any core 
validate that the patch is appropriate from the project's technical 
perspective.  It keeps the balance but broadens the validation pool.

--Rocky 

 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes

On 07/30/2014 12:21 PM, Kevin Benton wrote:

Maybe I misunderstood your approach then.

I though you were suggesting where a node performs an UPDATE record
WHERE record = last_state_node_saw query and then checks the number of
affected rows. That's optimistic locking by every definition I've heard
of it. It matches the following statement from the wiki article you
linked to as well:

The latter situation (optimistic locking) is only appropriate when
there is less chance of someone needing to access the record while it is
locked; otherwise it cannot be certain that the update will succeed
because the attempt to update the record will fail if another user
updates the record first.

Did I misinterpret how your approach works?


The record is never locked in my approach, which is why I don't like to
think of it as optimistic locking. It's more like an optimistic read and
update with retry if certain conditions continue to be met... :)


To be very precise, the record is never locked explicitly -- either 
through the use of SELECT FOR UPDATE or some explicit file or 
distributed lock. InnoDB won't even hold a lock on anything, as it will 
simply add a new version to the row using its MGCC (sometimes called 
MVCC) methods.


The technique I am showing in the patch relies on the behaviour of the 
SQL UPDATE statement with a WHERE clause that contains certain columns 
and values from the original view of the record. The behaviour of the 
UPDATE statement will be a NOOP when some other thread has updated the 
record in between the time that the first thread read the record, and 
the time the first thread attempted to update the record. The caller of 
UPDATE can detect this NOOP by checking the number of affected rows, and 
retry the UPDATE if certain conditions remain kosher...
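
A tiny, self-contained illustration of that read-then-conditional-UPDATE
(with retry) pattern, using sqlite3 purely so the snippet runs anywhere; the
table and column names are made up for the example:

  import sqlite3

  conn = sqlite3.connect(':memory:')
  conn.execute("CREATE TABLE router (id INTEGER PRIMARY KEY, status TEXT)")
  conn.execute("INSERT INTO router (id, status) VALUES (1, 'DOWN')")
  conn.commit()

  def set_status(conn, router_id, new_status, retries=3):
      """Compare-and-swap style update: no explicit locks are taken."""
      for _ in range(retries):
          # 1. Read the current view of the record.
          (old_status,) = conn.execute(
              "SELECT status FROM router WHERE id = ?",
              (router_id,)).fetchone()
          # 2. UPDATE only if the record still looks like what we read.
          cur = conn.execute(
              "UPDATE router SET status = ? WHERE id = ? AND status = ?",
              (new_status, router_id, old_status))
          conn.commit()
          # 3. rowcount == 0 means another writer got there first: retry.
          if cur.rowcount == 1:
              return True
      return False

  print(set_status(conn, 1, 'ACTIVE'))  # True on the happy path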


So, there's actually no locks taken in the entire process, which is why 
I object to the term optimistic locking :) I think where the confusion 
has been is that the initial SELECT and the following UPDATE statements 
are *not* done in the context of a single SQL transaction...


Best,
-jay


On Wed, Jul 30, 2014 at 11:07 AM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

On 07/30/2014 10:53 AM, Kevin Benton wrote:

Using the UPDATE WHERE statement you described is referred to as
optimistic locking. [1]


https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html


SQL != JBoss.

It's not optimistic locking in the database world. In the database
world, optimistic locking is an entirely separate animal:

http://en.wikipedia.org/wiki/Lock_(database)

And what I am describing is not optimistic lock concurrency in
databases.

-jay



_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-07-30 13:53:38 -0700:
 On 07/30/2014 12:21 PM, Kevin Benton wrote:
  Maybe I misunderstood your approach then.
 
  I though you were suggesting where a node performs an UPDATE record
  WHERE record = last_state_node_saw query and then checks the number of
  affected rows. That's optimistic locking by every definition I've heard
  of it. It matches the following statement from the wiki article you
  linked to as well:
 
  The latter situation (optimistic locking) is only appropriate when
  there is less chance of someone needing to access the record while it is
  locked; otherwise it cannot be certain that the update will succeed
  because the attempt to update the record will fail if another user
  updates the record first.
 
  Did I misinterpret how your approach works?
 
 The record is never locked in my approach, why is why I don't like to 
 think of it as optimistic locking. It's more like optimistic read and 
 update with retry if certain conditions continue to be met... :)
 
 To be very precise, the record is never locked explicitly -- either 
 through the use of SELECT FOR UPDATE or some explicit file or 
 distributed lock. InnoDB won't even hold a lock on anything, as it will 
 simply add a new version to the row using its MGCC (sometimes called 
 MVCC) methods.
 
 The technique I am showing in the patch relies on the behaviour of the 
 SQL UPDATE statement with a WHERE clause that contains certain columns 
 and values from the original view of the record. The behaviour of the 
 UPDATE statement will be a NOOP when some other thread has updated the 
 record in between the time that the first thread read the record, and 
 the time the first thread attempted to update the record. The caller of 
 UPDATE can detect this NOOP by checking the number of affected rows, and 
 retry the UPDATE if certain conditions remain kosher...
 
 So, there's actually no locks taken in the entire process, which is why 
 I object to the term optimistic locking :) I think where the confusion 
 has been is that the initial SELECT and the following UPDATE statements 
 are *not* done in the context of a single SQL transaction...
 

This is all true at a low level Jay. But if you're serializing something
outside the DB by using the doing it versus done it state, it still
acts like a lock.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-07-30 Thread Michael Still
Greetings,

I would like to nominate Jay Pipes for the nova-core team.

Jay has been involved with nova for a long time now.  He's previously
been a nova core, as well as a glance core (and PTL). He's been around
so long that there are probably other types of core status I have
missed.

Please respond with +1s or any concerns.

References:

  https://review.openstack.org/#/q/owner:%22jay+pipes%22+status:open,n,z

  https://review.openstack.org/#/q/reviewer:%22jay+pipes%22,n,z

  http://stackalytics.com/?module=nova-groupuser_id=jaypipes

As a reminder, we use the voting process outlined at
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
core team.

Thanks,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Eugene Nikanorov
In fact there are more applications for distributed locking than just
accessing data in a database.
One such use case is serializing access to devices.
This is not strongly needed yet, but it will be as we get more service
drivers working with appliances.

It would be great if some existing library could be adopted for it.

Thanks,
Eugene.


On Thu, Jul 31, 2014 at 12:53 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 07/30/2014 12:21 PM, Kevin Benton wrote:

 Maybe I misunderstood your approach then.

 I though you were suggesting where a node performs an UPDATE record
 WHERE record = last_state_node_saw query and then checks the number of
 affected rows. That's optimistic locking by every definition I've heard
 of it. It matches the following statement from the wiki article you
 linked to as well:

 The latter situation (optimistic locking) is only appropriate when
 there is less chance of someone needing to access the record while it is
 locked; otherwise it cannot be certain that the update will succeed
 because the attempt to update the record will fail if another user
 updates the record first.

 Did I misinterpret how your approach works?


 The record is never locked in my approach, why is why I don't like to
 think of it as optimistic locking. It's more like optimistic read and
 update with retry if certain conditions continue to be met... :)

 To be very precise, the record is never locked explicitly -- either
 through the use of SELECT FOR UPDATE or some explicit file or distributed
 lock. InnoDB won't even hold a lock on anything, as it will simply add a
 new version to the row using its MGCC (sometimes called MVCC) methods.

 The technique I am showing in the patch relies on the behaviour of the SQL
 UPDATE statement with a WHERE clause that contains certain columns and
 values from the original view of the record. The behaviour of the UPDATE
 statement will be a NOOP when some other thread has updated the record in
 between the time that the first thread read the record, and the time the
 first thread attempted to update the record. The caller of UPDATE can
 detect this NOOP by checking the number of affected rows, and retry the
 UPDATE if certain conditions remain kosher...

 So, there's actually no locks taken in the entire process, which is why I
 object to the term optimistic locking :) I think where the confusion has
 been is that the initial SELECT and the following UPDATE statements are
 *not* done in the context of a single SQL transaction...

 Best,
 -jay

  On Wed, Jul 30, 2014 at 11:07 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

 On 07/30/2014 10:53 AM, Kevin Benton wrote:

 Using the UPDATE WHERE statement you described is referred to as
 optimistic locking. [1]

 https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html


 SQL != JBoss.

 It's not optimistic locking in the database world. In the database
 world, optimistic locking is an entirely separate animal:

 http://en.wikipedia.org/wiki/Lock_(database)

 And what I am describing is not optimistic lock concurrency in
 databases.

 -jay



 _
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --
 Kevin Benton


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-07-30 Thread Russell Bryant
On 07/30/2014 05:02 PM, Michael Still wrote:
 Greetings,
 
 I would like to nominate Jay Pipes for the nova-core team.
 
 Jay has been involved with nova for a long time now.  He's previously
 been a nova core, as well as a glance core (and PTL). He's been around
 so long that there are probably other types of core status I have
 missed.
 
 Please respond with +1s or any concerns.

+1

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

