[openstack-dev] [Tripleo][Heat] Heat is not able to create swift cloud server

2014-07-21 Thread Peeyush Gupta
Hi all,

I have been trying to set up tripleo using instack with RDO.
Now, when deploying the overcloud, the script is failing consistently
with a CREATE_FAILED error:

+ heat stack-create -f overcloud.yaml -P 
AdminToken=efe958561450ba61d7ef8249d29b0be1ba95dc11 -P 
AdminPassword=2b919f2ac7790ca1053ac58bc4621ca0967a0cba -P 
CinderPassword=e7d61883a573a3dffc65a5fb958c94686baac848 -P 
GlancePassword=cb896d6392e08241d504f3a0a2b489fc6f2612dd -P 
HeatPassword=7a3138ef58365bb666cb30c8377447b74e75a0ef -P 
NeutronPassword=4480ec8f2e004be4b06d14e1e228d882e18b3c2c -P 
NovaPassword=e4a34b6caeeb7dbc497fb1c557a396c422b4d103 -P 
NeutronPublicInterface=eth0 -P 
SwiftPassword=ed3761a03959e0d636b8d6fc826103734069f9dc -P 
SwiftHashSuffix=1a26593813bb7d6b38418db747b4243d4f1b5a56 -P 
NovaComputeLibvirtType=qemu -P 'GlanceLogFile='\'''\''' -P 
NeutronDnsmasqOptions=dhcp-option-force=26,1400 overcloud
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 737ada9f-aa45-45b6-a42b-c0a496d2407e | overcloud  | CREATE_IN_PROGRESS | 2014-07-21T06:02:22Z |
+--------------------------------------+------------+--------------------+----------------------+
+ tripleo wait_for_stack_ready 220 10 overcloud
Command output matched 'CREATE_FAILED'. Exiting...

Here is the heat log:


2014-07-18 06:51:11.884 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:12.921 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:16.058 30750 ERROR heat.engine.resource [-] CREATE : Server 
SwiftStorage0 [07e42c3d-0f1b-4bb9-b980-ffbb74ac770d] Stack overcloud 
[0ca028e7-682b-41ef-8af0-b2eb67bee272]
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Traceback (most recent 
call last):
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File 
"/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 420, in 
_do_action
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource while not 
check(handle_data):
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File 
"/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 545, 
in check_create_complete
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource return 
self._check_active(server)
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File 
"/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 561, 
in _check_active
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource raise exc
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Error: Creation of 
server overcloud-SwiftStorage0-qdjqbif6peva failed.
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource
2014-07-18 06:51:16.255 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:16.939 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:17.368 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:17.638 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:18.158 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:18.613 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:19.113 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:19.765 30750 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:20.247 30750 WARNING heat.engine.service [-] Stack create 
failed, status FAILED

How can I resolve this?
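
(For reference, a typical way to dig further into this kind of failure - shown
here as a sketch using the names from the log above; the server UUID comes from
the physical_resource_id reported by heat resource-show:)

    $ heat resource-list overcloud                 # locate the failed resource
    $ heat resource-show overcloud SwiftStorage0   # resource_status_reason / physical_resource_id
    $ nova show <physical_resource_id>             # the fault field usually names the real cause
    $ nova console-log <physical_resource_id>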

 Thanks,
Peeyush Gupta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party] Embrane Plugin/CI status

2014-07-21 Thread Anita Kuno
On 07/20/2014 07:19 PM, Ignacio Scopetta wrote:
 Hello everybody,
 
 Given the removal of the OVS core plugin, the Embrane plugin was updated to 
 use ML2 as a dependency [0].
 Because of this, the CI will not be voting on other changesets until [0] gets 
 merged.
 
 Regards,
 Ignacio
 
 [0]https://review.openstack.org/#/c/108226/
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Let's try to reply to the master thread when a call for CI status thread
is started.

For example, this thread would be the one you should reply to in this
case:
http://lists.openstack.org/pipermail/openstack-dev/2014-July/040062.html

Starting a thread for every update on your system is too much noise for
subscribers.

We will soon have other means for updating your ci status. Please attend
the third-party meetings:
https://wiki.openstack.org/wiki/Meetings/ThirdParty and read the meeting
logs: http://eavesdrop.openstack.org/meetings/third_party/ to stay informed.

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Virtio-scsi settings nova-specs exception

2014-07-21 Thread Daniel P. Berrange
On Mon, Jul 21, 2014 at 10:21:20AM +1000, Michael Still wrote:
 I just want to check my understanding -- it seems to me that this
 depends on a feature that's very new to libvirt (merged there 22 May
 2014). Is that right?
 
 http://libvirt.org/git/?p=libvirt.git;a=commit;h=d950494129513558a303387e26a2bab057012c5e
 
 We've had some concerns about adding features to the libvirt driver
 for features represented only in very new versions of libvirt.
 https://review.openstack.org/#/c/72038/ is an example. Now, its clear
 to me that we don't yet have a consensus on nova-core on how to handle
 features depending on very new libvirts. There are certainly CI
 concerns at the least.
 
 So, I think approving this exception is tied up in that whole debate.

We've already approved many other blueprints for Juno that involve features
from new libvirt, so I don't think it is credible to reject this or any
other feature that requires new libvirt in Juno.

Furthermore, this proposal for Nova is a targeted feature which is not
enabled by default, so the risk of regression for people not using it
is negligible. So I see no reason not to accept this feature.

Regards,
Daniel
-- 
|: http://berrange.com  -o-   http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] Zuul trigger not starting Jenkins jobs

2014-07-21 Thread Steven Weston
On 7/20/14, 5:25 PM, daya kamath wrote:

all,
Need some pointers on debugging what the issue is. It's not very convenient for 
me to be on IRC due to timezone issues, so hoping the mailing list is a 
good next best option.
When I post a patch on the sandbox project, I see a review indicating my system 
is starting 'check' jobs, but I don't see any activity in Jenkins for the job. I 
can run the job manually from the master.

tia!
daya
-
output from review.openstack.org -


IBM Neutron Testing
Jul 14 3:33 PM
Patch Set 1:
Starting check jobs. http://127.0.0.1/zuul/status

output log from Zuul debug - http://paste.openstack.org/show/86642/

debug log excerpt:
2014-07-16 07:57:57,077 INFO zuul.Gerrit: Updating information for 106722,1
2014-07-16 07:57:57,936 DEBUG zuul.Gerrit: Change <Change 0x7f0f4e5d64d0 106722,1> status: NEW
2014-07-16 07:57:57,936 DEBUG zuul.Scheduler: Adding trigger event: TriggerEvent...

(configuration shows the job mapping properly, and it's receiving the triggers 
from the upstream, but these are not firing any Jenkins jobs)

The Jenkins master connection to Gearman is showing status as ok.

gearman status command output -

status
build:noop-check-communication:master   0   0   2
build:dsvm-tempest-full 0   0   2
build:dsvm-tempest-full:devstack_slave  0   0   2
merger:merge0   0   1
build:ibm-dsvm-tempest-full 0   0   2
zuul:get_running_jobs   0   0   1
set_description:9.126.153.171   0   0   1
build:ibm-dsvm-tempest-full:devstack_slave  0   0   2
stop:9.126.153.171  0   0   1
zuul:promote0   0   1
build:noop-check-communication  0   0   2
zuul:enqueue0   0   1
merger:update   0   0   1




Hi Daya,

I did ping you back in IRC last week; however you, unfortunately had already 
signed off.  I have tried to ping you several times since, but every time I 
have checked you have not been online.

In my experience, this issue has been caused by a mismatch between the jobs 
configured in the Zuul pipelines and those configured in Jenkins.  Can you post 
your Jenkins Job Builder files (your projects.yaml file and the yaml file 
in which you defined the ibm-dsvm-tempest-full job)?  Also, please post your 
zuul.conf file and your layout.yaml files as well.

Please feel free to follow up with me at 
swes...@brocade.commailto:swes...@brocade.com.  I will be happy to continue 
our discussion over email.

Thanks,
Steve Weston
OpenStack Software Engineer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a note on Spec Approval Deadline

2014-07-21 Thread Yuriy Taraday
Hello, Kyle.

As I can see, my spec got left behind. Should I give up any hope and move
it to Kilo dir?


On Mon, Jul 14, 2014 at 3:24 PM, Miguel Angel Ajo Pelayo 
mangel...@redhat.com wrote:

 The oslo-rootwrap spec counterpart of this
 spec has been approved:

 https://review.openstack.org/#/c/94613/

 Cheers :-)

 - Original Message -
  Yuriy, thanks for your spec and code! I'll sync with Carl tomorrow on
 this
  and see how we can proceed for Juno around this.
 
 
  On Sat, Jul 12, 2014 at 10:00 AM, Carl Baldwin  c...@ecbaldwin.net 
 wrote:
 
 
 
 
  +1 This spec had already been proposed quite some time ago. I'd like to
 see
  this work get in to juno.
 
  Carl
  On Jul 12, 2014 9:53 AM, Yuriy Taraday  yorik@gmail.com  wrote:
 
 
 
  Hello, Kyle.
 
  On Fri, Jul 11, 2014 at 6:18 PM, Kyle Mestery 
 mest...@noironetworks.com 
  wrote:
 
 
  Just a note that yesterday we passed SPD for Neutron. We have a
  healthy backlog of specs, and I'm working to go through this list and
  make some final approvals for Juno-3 over the next week. If you've
  submitted a spec which is in review, please hang tight while myself
  and the rest of the neutron cores review these. It's likely a good
  portion of the proposed specs may end up as deferred until K
  release, given where we're at in the Juno cycle now.
 
  Thanks!
  Kyle
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  Please don't skip my spec on rootwrap daemon support:
  https://review.openstack.org/#/c/93889/
  It got -2'd by Mark McClain when my spec in oslo wasn't approved, but now
  that's fixed; however, it's not easy to get hold of Mark.
  Code for that spec (also -2'd by Mark) is close to being finished and
 requires
  some discussion to get merged by Juno-3.
 
  --
 
  Kind regards, Yuriy.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Add a hacking check to not use Python Source Code Encodings (PEP0263)

2014-07-21 Thread Christian Berendt
Hello.

There are some files using a Python source code encoding declaration as the first
line. That's normally not necessary, and I want to propose introducing a
hacking check that verifies the absence of source code encoding declarations.
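
For illustration, such a check could look roughly like the following (a sketch
only - the function name, the H999 error code and the registration via the
hacking entry points are assumptions here, not an existing check):

    import re

    CODING_COMMENT_RE = re.compile(r'#.*coding[:=]\s*[-\w.]+')

    def no_source_encoding_declaration(physical_line, line_number):
        """H999 - do not declare a source code encoding (PEP 0263).

        The declaration is only honoured on the first two lines of a file,
        and our source files are plain ASCII/UTF-8 anyway.
        """
        if line_number <= 2 and CODING_COMMENT_RE.search(physical_line):
            return 0, "H999: remove the source code encoding declaration"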

Best, Christian.

-- 
Christian Berendt
Cloud Computing Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-21 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512



On 21/07/14 04:53, Angus Lees wrote:
 Status, as I understand it:
 
 * oslo.db changes to support other mysql drivers:
 
 https://review.openstack.org/#/c/104425/  (merged) 
 https://review.openstack.org/#/c/106928/  (awaiting oslo.db
 review) https://review.openstack.org/#/c/107221/  (awaiting oslo.db
 review)

For that last one, the idea is correct, but the implementation is
wrong, see my comments in the review.

 
 (These are sufficient to allow operators to switch connection
 strings and use mysqlconnector.  The rest is all for our testing
 environment)
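
(For illustration only - the operator-visible part of such a switch is just the
SQLAlchemy dialect prefix in the service's connection string; the values below
are placeholders:)

    [database]
    # current default, using the MySQLdb (mysql-python) driver
    connection = mysql://nova:secret@127.0.0.1/nova
    # same database, using MySQL Connector/Python instead
    connection = mysql+mysqlconnector://nova:secret@127.0.0.1/nova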
 
 * oslo.db change to allow testing with other mysql drivers:
 
 https://review.openstack.org/#/c/104428/  (awaiting oslo.db
 review) https://review.openstack.org/#/c/104447/  (awaiting oslo.db
 review. Ongoing discussion towards a larger rewrite of oslo.db
 testing instead)
 
 * Integration into jenkins environment:
 
 Blocked on getting Oracle to distribute mysql-connector via pypi. 
 Ihar and others are having conversations with the upstream author.
 
 * Devstack change to switch to mysqlconnector for neutron:
 
 https://review.openstack.org/#/c/105209/  (marked wip) Ihar: do you
 want me to pick this up, or are you going to continue it once some
 of the above has settled?

This is in WIP because it's not clear now whether the switch is
expected to be global or local to neutron. I'll make sure it's covered
if/when spec is approved.

 
 * oslo.db gate test that reproduces the deadlock with eventlet:
 
 https://review.openstack.org/#/c/104436/  (In review.  Can't be 
 submitted until gate environment is switched to mysqlconnector)
 

+ performance is yet to be benchmarked for different projects.

 
 Overall I'm not happy with the rate of change - but we're getting
 there.

That's Openstack! Changes take time here.

 I look forward to getting this fixed :/
 

Thanks for tracking oslo.db part of that, I really appreciate that.

 
 On 18 July 2014 21:45, Ihar Hrachyshka ihrac...@redhat.com 
 mailto:ihrac...@redhat.com wrote:
 
 On 14/07/14 17:03, Ihar Hrachyshka wrote:
 On 14/07/14 15:54, Clark Boylan wrote:
 On Sun, Jul 13, 2014 at 9:20 AM, Ihar Hrachyshka 
 ihrac...@redhat.com mailto:ihrac...@redhat.com wrote: On
 11/07/14 19:20, Clark Boylan
 wrote:
 Before we get too far ahead of ourselves mysql-connector 
 is not hosted on pypi. Instead it is an external package 
 link. We recently managed to remove all packages that
 are hosted as external package links from openstack and
 will not add new ones in. Before we can use
 mysql-connector in the gate oracle will need to publish
 mysql-connector on pypi properly.
 
 There is misunderstanding in our community on how we deploy db 
 client modules. No project actually depends on any of them. We 
 assume deployers will install the proper one and configure 
 'connection' string to use it. In case of devstack, we install 
 the appropriate package from distribution packages, not pip.
 
 Correct, but for all of the other test suites (unittests) we 
 install the db clients via pip because tox runs them and 
 virtualenvs allowing site packages cause too many problems.
 See
 
 
 https://git.openstack.org/cgit/openstack/nova/tree/test-requirements.txt#n8.



 
 
 So we do actually depend on these things being pip installable.
 Basically this allows devs to run `tox` and it works.
 
 Roger that, and thanks for clarification. I'm trying to reach
 the author and the maintainer of mysqlconnector-python to see
 whether I'll be able to convince him to publish the packages on 
 pypi.python.org http://pypi.python.org.
 
 
 I've reached the maintainer of the module, he told me he is
 currently working on uploading releases directly to
 pypi.python.org http://pypi.python.org.
 
 
 I would argue that we should have devstack install via pip
 too for consistency, but that is a different issue (it is
 already installing all of the other python dependencies this
 way so why special case?).
 
 What we do is recommending a module for our users in our 
 documentation.
 
 That said, I assume the gate is a non-issue. Correct?
 
 
 That said there is at least one other pure python 
 alternative, PyMySQL. PyMySQL supports py3k and pypy. We 
 should look at using PyMySQL instead if we want to start 
 with a reasonable path to getting this in the gate.
 
 MySQL Connector supports py3k too (not sure about pypy
 though).
 
 
 Clark
 
 On Fri, Jul 11, 2014 at 10:07 AM, Miguel Angel Ajo
 Pelayo mangel...@redhat.com
 mailto:mangel...@redhat.com wrote:
 +1 here too,
 
 Amazed with the performance gains, x2.4 seems a lot,
 and we'd get rid of deadlocks.
 
 
 
 - Original Message -
 +1
 
 I'm pretty excited about the possibilities here.
 I've had this mysqldb/eventlet contention in the back
 of my mind for some time now. I'm glad to see some
 work being done in this area.
 
 Carl
 
 On Fri, Jul 11, 2014 at 7:04 AM, Ihar Hrachyshka 
 ihrac...@redhat.com mailto:ihrac...@redhat.com
 wrote:
 

[openstack-dev] [nova] Nested Quota Driver API blue print

2014-07-21 Thread Ulrich Schwickerath

Hi, all,

we'd like to ask for an exception for our blueprint

https://review.openstack.org/#/c/102201

on the Nested Quota Driver API. It was marked as abandoned on 
Wednesday last week. Development of it is well advanced now, as are 
the related keystone parts, and for that reason we would like to ask for 
an exception to have it still considered for Juno. Sajeesh, who owns the 
blueprint, is currently away for personal reasons without internet 
access but will be back within the next day.


Can you please let us know which steps are required on our part to get 
that exception?


Thanks a lot and kind regards,
Ulrich



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec freeze exception] Online Schema Changes

2014-07-21 Thread John Garbutt
On 19 July 2014 03:53, Johannes Erdfelt johan...@erdfelt.com wrote:
 I'm requestion a spec freeze exception for online schema changes.

 https://review.openstack.org/102545

 This work is being done to try to minimize the downtime as part of
 upgrades. Database migrations have historically been a source of long
 periods of downtime. The spec is an attempt to start optimizing this
 part by allowing deployers to perform most schema changes online, while
 Nova is running.

Improving upgrades is high priority, and I feel it will help reduce
the amount of downtime required when performing database migrations.

So I am happy to sponsor this.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec freeze exception] Instance rescue support in Hyper-V

2014-07-21 Thread John Garbutt
On 19 July 2014 00:56, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:
 Hi everyone,

 I’d like to propose the following driver feature parity blueprint spec for an 
 exception:

 https://review.openstack.org/#/c/105042

 This blueprint introduces rescue instance support in the Nova Hyper-V
 driver for feature parity with other drivers.

 The Hyper-V Nova driver currently does not support the Nova rescue commands,
 unlike other hypervisor drivers (e.g. libvirt).

 The driver can be extended to support the rescue feature,
 supporting both Linux and Windows images.

 Hyper-V uses VHD/VHDX images, not AMI/AKI/ARI. The Nova rescue command will
 result in a new temporary image spawned using the same image as the instance 
 to
 be rescued, attaching the root disk of the original image as secondary local
 disk.

 The unrescue command will result in the temporary instance being deleted and
 the original instance restarted.
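
(For reference, the user-facing flow this parity work targets is simply the
existing CLI, sketched here with a placeholder server name:)

    $ nova rescue <server>     # boot the temporary rescue instance for <server>
    $ nova unrescue <server>   # delete it and restart the original instance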

I think this helps make the hypervisor drivers more consistent.

That's important to our users, so I am happy to sponsor this blueprint.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec Freeze Exception] [Gantt] Scheduler Isolate DB spec

2014-07-21 Thread John Garbutt
On 18 July 2014 09:10, Sylvain Bauza sba...@redhat.com wrote:
 Hi team,

 I would like to put your attention on https://review.openstack.org/89893
 This spec targets to isolate access within the filters to only Scheduler
 bits. This one is a prerequisite for a possible split of the scheduler
 into a separate project named Gantt, as it's necessary to remove direct
 access to other Nova objects (like aggregates and instances).

 This spec is one of the oldest specs so far, but its approval has been
 delayed because there were other concerns to discuss first about how we
 split the scheduler. Now that these concerns have been addressed, it is
 time for going back to that blueprint and iterate over it.

 I understand the exception is for a window of 7 days. In my opinion,
 this objective is targetable as now all the pieces are there for making
 a consensus.

 The change by itself is only a refactoring of the existing code with no
 impact on APIs nor on the DB schema, so IMHO this blueprint is a good
 opportunity for staying on track with the objective of a split by the
 beginning of Kilo.

 Cores, I leave you appreciate the urgency and I'm available by IRC or
 email for answering questions.

Regardless of Gantt, tidying up the data dependencies here makes sense.

I feel we need to consider how the above works with upgrades.

I am happy to sponsor this blueprint. Although I worry we might not
get agreement in time.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec freeze exception] Instance tasks

2014-07-21 Thread John Garbutt
On 18 July 2014 14:28, Andrew Laski andrew.la...@rackspace.com wrote:
 Hello everybody,

 I would like to request a spec proposal extension for instance tasks,
 described in https://review.openstack.org/#/c/86938/ .  This has been a long
 discussed and awaited feature with a lot of support from the community.

 This feature has been intertwined with the fate of the V3 API, which is
 still being worked out, and may not be completed in Juno.  This means that I
 lack confidence that the tasks API work can be fully completed in Juno as
 well.  But there is more to the tasks work than just the API, and I would
 like to get some of that groundwork done. In fact, one of the challenges
 with the task work is how to handle an upgrade situation with an API that
 exposes tasks and computes which are not task aware and therefore don't
 update them properly. If it's acceptable I would propose stripping the API
 portion of the spec for now and focus on getting Juno computes to be task
 aware so that tasks exposed in the Kilo API would be handled properly with
 Juno computes.  This of course assumes that we're reasonably confident we
 want to add tasks to the API in Kilo.

I see better task handling as a key to better organising the error
handling inside Nova, and improving stability.

As such I am happy to sponsor this spec.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] Zuul trigger not starting Jenkins jobs

2014-07-21 Thread daya kamath
hi steve,
thanks a lot for following up! i'm based out of india, so there's not much 
overlap in timezones. i'll unicast you for next steps. wanted to post the info 
you asked for in this thread.
the 2 files are here - Paste #87382: http://paste.openstack.org/show/87382/
(preview of examples.yaml: a job-template 'noop-check-communication' with node 
'{node}' and a shell builder echoing "Hello world, this is the {vendor} Testing 
System", ...)

I just have some customizations to the devstack-gate script, but the overall 
framework is more or less intact as cloned from 
https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_master.sh. 
Not using nodepool currently, just 1 master and 1 slave node.

thanks!




 From: Steven Weston swes...@brocade.com
To: daya kamath day...@yahoo.com; OpenStack Development Mailing List (not for 
usage questions) openstack-dev@lists.openstack.org 
Sent: Monday, July 21, 2014 1:53 PM
Subject: Re: [openstack-dev] [third-party] Zuul trigger not starting Jenkins 
jobs
 


On 7/20/14, 5:25 PM, daya kamath wrote:



all,
Need some pointers on debugging what the issue is. It's not very convenient for 
me to be on IRC due to timezone issues, so hoping the mailing list is a 
good next best option.
When I post a patch on the sandbox project, I see a review indicating my 
system is starting 'check' jobs, but I don't see any activity in Jenkins for 
the job. I can run the job manually from the master.


tia!
daya
-
output from review.openstack.org -


 IBM Neutron Testing 
 Jul 14 3:33 PM 
Patch Set 1:
Starting check jobs. http://127.0.0.1/zuul/status


output log from Zuul debug - Paste #86642: http://paste.openstack.org/show/86642/

debug log excerpt:
2014-07-16 07:57:57,077 INFO zuul.Gerrit: Updating information for 106722,1
2014-07-16 07:57:57,936 DEBUG zuul.Gerrit: Change <Change 0x7f0f4e5d64d0 106722,1> status: NEW
2014-07-16 07:57:57,936 DEBUG zuul.Scheduler: Adding trigger event: TriggerEvent...


(configuration shows the job mapping properly, and it's receiving the triggers 
from the upstream, but these are not firing any Jenkins jobs)


The Jenkins master connection to Gearman is showing status as ok. 


gearman status command output -


status
build:noop-check-communication:master   0       0       2
build:dsvm-tempest-full 0       0       2
build:dsvm-tempest-full:devstack_slave  0       0       2
merger:merge    0       0       1
build:ibm-dsvm-tempest-full     0       0       2
zuul:get_running_jobs   0       0       1
set_description:9.126.153.171   0       0       1
build:ibm-dsvm-tempest-full:devstack_slave      0       0       2
stop:9.126.153.171      0       0       1
zuul:promote    0       0       1
build:noop-check-communication  0       0       2
zuul:enqueue    0       0       1
merger:update   0       0       1






Hi Daya,

I did ping you back in IRC last week; however you, unfortunately had
already signed off.  I have tried to ping you several times since,
but every time I have checked you have not been online.

In my experience, this issue has been caused by a mismatch between the
jobs configured in the Zuul pipelines and those configured in
Jenkins.  Can you post your Jenkins Job Builder files (your
projects.yaml file and the yaml file in which you defined the
ibm-dsvm-tempest-full job)?  Also, please post your zuul.conf file
and your layout.yaml files as well.

Please feel free to follow up with me at swes...@brocade.com.  I will be happy 
to continue our discussion over email.

Thanks,
Steve Weston
OpenStack Software Engineer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

2014-07-21 Thread Duncan Thomas
The iSCSI lun won't be set up until you try to attach the volume
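
(For illustration - a sketch with placeholder IDs - the export is created as
part of the attach operation, after which discovery should show the target:)

    $ nova volume-attach <server-id> <volume-id> /dev/vdb   # target/LUN is created at attach time
    $ iscsiadm -m discovery -t st -p 192.168.106.20          # should now list the target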

On 17 July 2014 12:44, Johnson Cheng johnson.ch...@qsantechnology.com wrote:
 Dear All,



 I installed iSCSI target at my controller node (IP: 192.168.106.20),

 #iscsitarget open-iscsi iscsitarget-dkms



 then modify my cinder.conf at controller node as below,

 [DEFAULT]

 rootwrap_config = /etc/cinder/rootwrap.conf

 api_paste_confg = /etc/cinder/api-paste.ini

 #iscsi_helper = tgtadm

 iscsi_helper = ietadm

 volume_name_template = volume-%s

 volume_group = cinder-volumes

 verbose = True

 auth_strategy = keystone

 #state_path = /var/lib/cinder

 #lock_path = /var/lock/cinder

 #volumes_dir = /var/lib/cinder/volumes

 iscsi_ip_address=192.168.106.20



 rpc_backend = cinder.openstack.common.rpc.impl_kombu

 rabbit_host = controller

 rabbit_port = 5672

 rabbit_userid = guest

 rabbit_password = demo



 glance_host = controller



 enabled_backends=lvmdriver-1,lvmdriver-2

 [lvmdriver-1]

 volume_group=cinder-volumes-1

 volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

 volume_backend_name=LVM_iSCSI

 [lvmdriver-2]

 volume_group=cinder-volumes-2

 volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

 volume_backend_name=LVM_iSCSI_b

 [database]

 connection = mysql://cinder:demo@controller/cinder



 [keystone_authtoken]

 auth_uri = http://controller:5000

 auth_host = controller

 auth_port = 35357

 auth_protocol = http

 admin_tenant_name = service

 admin_user = cinder

 admin_password = demo



 Now I use the following command to create a cinder volume, and it can be
 created successfully.

 # cinder create --volume-type lvm_controller --display-name vol 1



 Unfortunately it seems not attach to a iSCSI LUN automatically because I can
 not discover it from iSCSI initiator,

 # iscsiadm -m discovery -t st -p 192.168.106.20



 Do I miss something?





 Regards,

 Johnson





 From: Manickam, Kanagaraj [mailto:kanagaraj.manic...@hp.com]
 Sent: Thursday, July 17, 2014 1:19 PM


 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question



 I think, It should be on the cinder node which is usually deployed on the
 controller node



 From: Johnson Cheng [mailto:johnson.ch...@qsantechnology.com]
 Sent: Thursday, July 17, 2014 10:38 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Cinder] Integrated with iSCSI target Question



 Dear All,



 I have three nodes, a controller node and two compute nodes(volume node).

 The default value for iscsi_helper in cinder.conf is “tgtadm”, I will change
 to “ietadm” to integrate with iSCSI target.

 Unfortunately I am not sure that iscsitarget should be installed at
 controller node or compute node?

 Have any reference?





 Regards,

 Johnson




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] how scheduler handle messages?

2014-07-21 Thread fdsafdsafd
Hello,
   Recently, I used Rally to test boot-and-delete. I thought that one 
nova-scheduler would handle the messages sent to it one by one, but the log 
output shows otherwise. Can someone explain how nova-scheduler handles messages? 
I read the code in nova.service and found that one service will create a fanout 
consumer, and that all fanout messages are consumed in one thread. So I wonder: 
how does nova-scheduler handle messages if there are many messages cast to 
call the scheduler's run_instance?
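
(For context, a minimal self-contained sketch - not Nova code, names invented -
of the general eventlet dispatch model: one consumer loop takes messages off the
queue, but each message is handled in its own greenthread, so handlers for many
run_instance casts can interleave:)

    import eventlet
    eventlet.monkey_patch()

    pool = eventlet.GreenPool()

    def handle_run_instance(msg):
        eventlet.sleep(0.1)      # simulate I/O (DB lookup, REST call, ...)
        print("scheduled %s" % msg)

    def consumer(messages):
        for msg in messages:                        # messages are pulled one by one...
            pool.spawn_n(handle_run_instance, msg)  # ...but handled concurrently

    consumer(["req-%d" % i for i in range(5)])
    pool.waitall()
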
Thanks a lot.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gate] The gate: a failure analysis

2014-07-21 Thread Matthew Booth
On Friday evening I had a dependent series of 5 changes all with
approval waiting to be merged. These were all refactor changes in the
VMware driver. The changes were:

* VMware: DatastorePath join() and __eq__()
https://review.openstack.org/#/c/103949/

* VMware: use datastore classes get_allowed_datastores/_sub_folder
https://review.openstack.org/#/c/103950/

* VMware: use datastore classes in file_move/delete/exists, mkdir
https://review.openstack.org/#/c/103951/

* VMware: Trivial indentation cleanups in vmops
https://review.openstack.org/#/c/104149/

* VMware: Convert vmops to use instance as an object
https://review.openstack.org/#/c/104144/

The last change merged this morning.

In order to merge these changes, over the weekend I manually submitted:

* 35 rechecks due to false negatives, an average of 7 per change
* 19 resubmissions after a change passed, but its dependency did not

Other interesting numbers:

* 16 unique bugs
* An 87% false negative rate
* 0 bugs found in the change under test

Because we don't fail fast, that is an average of at least 7.3 hours in
the gate. Much more in fact, because some runs fail on the second pass,
not the first. Because we don't resubmit automatically, that is only if
a developer is actively monitoring the process continuously, and
resubmits immediately on failure. In practise this is much longer,
because sometimes we have to sleep.

All of the above numbers are counted from the change receiving an
approval +2 until final merging. There were far more failures than this
during the approval process.

Why do we test individual changes in the gate? The purpose is to find
errors *in the change under test*. By the above numbers, it has failed
to achieve this at least 16 times previously.

Probability of finding a bug in the change under test: Small
Cost of testing:   High
Opportunity cost of slowing development:   High

and for comparison:

Cost of reverting rare false positives:Small

The current process expends a lot of resources, and does not achieve its
goal of finding bugs *in the changes under test*. In addition to using a
lot of technical resources, it also prevents good change from making its
way into the project and, not unimportantly, saps the will to live of
its victims. The cost of the process is overwhelmingly greater than its
benefits. The gate process as it stands is a significant net negative to
the project.

Does this mean that it is worthless to run these tests? Absolutely not!
These tests are vital to highlight a severe quality deficiency in
OpenStack. Not addressing this is, imho, an existential risk to the
project. However, the current approach is to pick contributors from the
community at random and hold them personally responsible for project
bugs selected at random. Not only has this approach failed, it is
impractical, unreasonable, and poisonous to the community at large. It
is also unrelated to the purpose of gate testing, which is to find bugs
*in the changes under test*.

I would like to make the radical proposal that we stop gating on CI
failures. We will continue to run them on every change, but only after
the change has been successfully merged.

Benefits:
* Without rechecks, the gate will use 8 times fewer resources.
* Log analysis is still available to indicate the emergence of races.
* Fixes can be merged quicker.
* Vastly less developer time spent monitoring gate failures.

Costs:
* A rare class of merge bug will make it into master.

Note that the benefits above will also offset the cost of resolving this
rare class of merge bug.

Of course, we still have the problem of finding resources to monitor and
fix CI failures. An additional benefit of not gating on CI will be that
we can no longer pretend that picking developers for project-affecting
bugs by lottery is likely to achieve results. As a project we need to
understand the importance of CI failures. We need a proper negotiation
with contributors to staff a team dedicated to the problem. We can then
use the review process to ensure that the right people have an incentive
to prioritise bug fixes.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] multiple backend issue

2014-07-21 Thread Johnson Cheng
Dear git-harry,

You are right. This issue was solved.

Thanks for your help.


Regards,
Johnson

-Original Message-
From: git harry [mailto:git-ha...@live.co.uk] 
Sent: Saturday, July 19, 2014 4:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] multiple backend issue

Ah, okay I misunderstood. It looks like you've used the same config file on 
both the controller and compute nodes, notice how the output of cinder-manage 
gives you hosts corresponding to both backends on your two nodes.

 controller@lvmdriver-2 nova
 controller@lvmdriver-1 nova
 Compute@lvmdriver-1 nova
 Compute@lvmdriver-2 nova

Each cinder-volume service you are running has tried to setup both backends 
even though only one of the volume groups is available to them. The 
enabled_backends should correspond to what that particular cinder-volume 
service is responsible for and you only need to specify the backend 
configuration groups that that specific volume group will use.

controller:


enabled_backends=lvmdriver-1

[lvmdriver-1]

volume_group=cinder-volumes-1

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

volume_backend_name=LVM_iSCSI


compute:


enabled_backends=lvmdriver-2

[lvmdriver-2]

volume_group=cinder-volumes-2

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

volume_backend_name=LVM_iSCSI_b


 From: johnson.ch...@qsantechnology.com
 To: openstack-dev@lists.openstack.org
 Date: Fri, 18 Jul 2014 16:33:10 +
 Subject: Re: [openstack-dev] [Cinder] multiple backend issue

 Dear git-harry,

 My confuse is why I can successfully create volume on both controller node 
 and compute node, but it still has error message in cinder-volume.log?

 The below is my environment,
 Controller node:
 Install cinder-api, cinder-schedule, cinder-volume Create 
 cinder-volume-1 volume group Compute node:
 Install cinder-volume
 Create cinder-volume-2 volume group

 The below is the output of cinder extra-specs-list,
 +--------------------------------------+----------------+-------------------------------------------+
 | ID                                   | Name           | extra_specs                               |
 +--------------------------------------+----------------+-------------------------------------------+
 | 30faffa9-7955-484f-9c96-3f40507aa62e | lvm_compute    | {u'volume_backend_name': u'LVM_iSCSI_b'}  |
 | c2341962-b15e-4003-882f-08a8a36d3a0f | lvm_controller | {u'volume_backend_name': u'LVM_iSCSI'}    |
 +--------------------------------------+----------------+-------------------------------------------+

 The below is the output of  cinder-manage host list
 host zone
 controller nova
 Compute nova
 controller@lvmdriver-2 nova
 controller@lvmdriver-1 nova
 Compute@lvmdriver-1 nova
 Compute@lvmdriver-2 nova

 So I just make sure if everything is right at my environment.

 Regards,
 Johnson


 -Original Message-
 From: git harry [mailto:git-ha...@live.co.uk]
 Sent: Friday, July 18, 2014 4:08 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Cinder] multiple backend issue

 I don't know what you mean by side effects, if the fact that it (lvmdriver-2) 
 doesn't work is not a problem for you. You will also continue to get entries 
 in the log informing you the driver is uninitialised.

 The volume group needs to be on the same host as the cinder-volume service - 
 so it sounds like the service is running on your controller only. If you want 
 to locate volumes on the compute host you will need to install the service 
 there.


 
 From: johnson.ch...@qsantechnology.com
 To: openstack-dev@lists.openstack.org
 Date: Thu, 17 Jul 2014 15:39:40 +
 Subject: Re: [openstack-dev] [Cinder] multiple backend issue

 Dear git-harry,

 I have created a volume group cinder-volume-1 at my controller node, and 
 another volume group cinder-volume-2 at my compute node.

 I can create volume successfully on dedicated backend.
 Of course I can ignore the error message, but I have to know if any 
 side-effect?

 Regards,
 Johnson

 -Original Message-
 From: git harry [mailto:git-ha...@live.co.uk]
 Sent: Thursday, July 17, 2014 7:32 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Cinder] multiple backend issue

 You are using multibackend but it appears you haven't created both volume 
 groups:

 Stderr: ' Volume group cinder-volumes-2 not found\n'

 If you can create volumes it suggest the other backend is correctly 
 configured. So you can ignore the error if you want but you will not be able 
 to use the second backend you have attempted to setup.

 
 From: johnson.ch...@qsantechnology.com
 To: openstack-dev@lists.openstack.org
 Date: Thu, 17 Jul 2014 11:03:41 +
 Subject: [openstack-dev] [Cinder] multiple backend issue


 Dear All,



 I have two machines as below,

 Machine1 (192.168.106.20): controller node 

[openstack-dev] [nova] nova list Question

2014-07-21 Thread Johnson Cheng
Dear All,

When I set up an OpenStack node, should nova list show any output?

Here is my output of nova image-list,
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| e22d8a77-d3ad-458a-a073-aea8b185be22 | cirros-0.3.2-x86_64 | SAVING |        |
+--------------------------------------+---------------------+--------+--------+

But the output of nova list is empty,
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

Is it correct?
I want to use it to attach a cinder volume.


Regards,
Johnson

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

2014-07-21 Thread Johnson Cheng
Dear Thomas,

Thanks for your reply.
So when I attach the volume manually, will the iSCSI LUN be set up automatically 
based on cinder.conf (iscsi_helper and iscsi_ip_address)?


Regards,
Johnson

-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Monday, July 21, 2014 6:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

The iSCSI lun won't be set up until you try to attach the volume

On 17 July 2014 12:44, Johnson Cheng johnson.ch...@qsantechnology.com wrote:
 Dear All,



 I installed iSCSI target at my controller node (IP: 192.168.106.20),

 #iscsitarget open-iscsi iscsitarget-dkms



 then modify my cinder.conf at controller node as below,

 [DEFAULT]

 rootwrap_config = /etc/cinder/rootwrap.conf

 api_paste_confg = /etc/cinder/api-paste.ini

 #iscsi_helper = tgtadm

 iscsi_helper = ietadm

 volume_name_template = volume-%s

 volume_group = cinder-volumes

 verbose = True

 auth_strategy = keystone

 #state_path = /var/lib/cinder

 #lock_path = /var/lock/cinder

 #volumes_dir = /var/lib/cinder/volumes

 iscsi_ip_address=192.168.106.20



 rpc_backend = cinder.openstack.common.rpc.impl_kombu

 rabbit_host = controller

 rabbit_port = 5672

 rabbit_userid = guest

 rabbit_password = demo



 glance_host = controller



 enabled_backends=lvmdriver-1,lvmdriver-2

 [lvmdriver-1]

 volume_group=cinder-volumes-1

 volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

 volume_backend_name=LVM_iSCSI

 [lvmdriver-2]

 volume_group=cinder-volumes-2

 volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

 volume_backend_name=LVM_iSCSI_b

 [database]

 connection = mysql://cinder:demo@controller/cinder



 [keystone_authtoken]

 auth_uri = http://controller:5000

 auth_host = controller

 auth_port = 35357

 auth_protocol = http

 admin_tenant_name = service

 admin_user = cinder

 admin_password = demo



 Now I use the following command to create a cinder volume, and it can 
 be created successfully.

 # cinder create --volume-type lvm_controller --display-name vol 1



 Unfortunately it seems not attach to a iSCSI LUN automatically because 
 I can not discover it from iSCSI initiator,

 # iscsiadm -m discovery -t st -p 192.168.106.20



 Do I miss something?





 Regards,

 Johnson





 From: Manickam, Kanagaraj [mailto:kanagaraj.manic...@hp.com]
 Sent: Thursday, July 17, 2014 1:19 PM


 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target 
 Question



 I think, It should be on the cinder node which is usually deployed on 
 the controller node



 From: Johnson Cheng [mailto:johnson.ch...@qsantechnology.com]
 Sent: Thursday, July 17, 2014 10:38 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Cinder] Integrated with iSCSI target 
 Question



 Dear All,



 I have three nodes, a controller node and two compute nodes(volume node).

 The default value for iscsi_helper in cinder.conf is “tgtadm”, I will 
 change to “ietadm” to integrate with iSCSI target.

 Unfortunately I am not sure that iscsitarget should be installed at 
 controller node or compute node?

 Have any reference?





 Regards,

 Johnson




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TROVE] Guest prepare call polling mechanism issue

2014-07-21 Thread Denis Makogon
Hello Stackers.


I’d like to discuss an issue related to the Trove-guestagent prepare call
polling mechanism (see [1]).

Let me first describe why this is actually an issue and why it should be
fixed. Those of you who are familiar with Trove know that Trove can
provision instances through the Nova API and the Heat API (see [2] and [3]).



What’s the difference between these two ways (in general)? The answer is
simple:

- The Heat-based provisioning method has a polling mechanism that verifies that
stack provisioning completed successfully (see [4]), which means that all
stack resources are in the ACTIVE state.

- The Nova-based provisioning method doesn’t do any polling (which is wrong,
since the instance can’t fail as fast as possible, because the Trove-taskmanager
service doesn’t verify that the launched server has reached the ACTIVE state).
That’s issue #1 - the compute instance state is unknown, whereas with Heat the
delivered resources are already in the ACTIVE state.

Once one of the methods [2] or [3] has finished, the taskmanager prepares data for
the guest (see [5]) and then tries to send the prepare call to the guest (see [6]).
Here comes issue #2 - the polling mechanism makes at least 100 API calls to Nova
to determine the compute instance status.

The taskmanager also makes almost the same number of calls to the Trove backend to
discover the guest status, which is totally normal.

So, here comes the question: why should I call Nova 99 times for the
same value if the value returned the first time was completely OK?



There’s only one way to fix it. Since Heat-based provisioning delivers an
instance with a status validation procedure, the same thing should be done
for Nova-based provisioning (we should extract the compute instance status
polling from the guest prepare polling mechanism and integrate it into [2]), and
leave only guest status discovery in the guest prepare polling mechanism.
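
A rough sketch of the proposed split (illustrative names only, not the actual
Trove code):

    import time

    def poll_until(predicate, sleep_time=3, time_out=300):
        deadline = time.time() + time_out
        while time.time() < deadline:
            if predicate():
                return
            time.sleep(sleep_time)
        raise RuntimeError("polling timed out")

    def wait_for_server_active(nova, server_id):
        # issue #1: verify the Nova server itself reaches ACTIVE, failing fast on ERROR
        def _active():
            status = nova.servers.get(server_id).status
            if status == 'ERROR':
                raise RuntimeError("compute instance went to ERROR")
            return status == 'ACTIVE'
        poll_until(_active)

    def wait_for_guest_ready(get_guest_status):
        # issue #2: after that, only the Trove backend is polled for the guest state
        poll_until(lambda: get_guest_status() == 'RUNNING')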




Benefits? The proposed fix will allow corrupted instances to fail fast and will
reduce the number of redundant Nova API calls made while
attempting to discover the guest status.


Proposed fix for this issue - [7].

[1] - https://launchpad.net/bugs/1325512

[2] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L198-L215

[3] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L190-L197

[4] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L420-L429

[5] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L217-L256

[6] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L254-L266

[7] - https://review.openstack.org/#/c/97194/


Thoughts?

Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] VMware DVS support

2014-07-21 Thread Kyle Mestery
On Sun, Jul 20, 2014 at 4:21 AM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 I would like to propose the following for spec freeze exception:

 https://review.openstack.org/#/c/105369

 This is an umbrella spec for a number of VMware DVS support specs. Each has
 its own unique use case and will enable a lot of existing VMware DVS users
 to start to use OpenStack.

 For https://review.openstack.org/#/c/102720/ we have the following which we
 can post when the internal CI for the NSX-v is ready (we are currently
 working on this):
  - core plugin functionality
  - layer 3 support
  - security group support

Do we need to approve all of the specs under the umbrella as well?

 Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a note on Spec Approval Deadline

2014-07-21 Thread Kyle Mestery
On Mon, Jul 21, 2014 at 3:44 AM, Yuriy Taraday yorik@gmail.com wrote:
 Hello, Kyle.

 As I can see, my spec got left behind. Should I give up any hope and move it
 to Kilo dir?

Hi Yuriy:

This spec was on the radar to get an exception. Mark was on vacation
last week and he had a -2 on it, so I couldn't get that removed. I'll
sync with Mark today and reply back on this thread once that happens
with the status. Hang in there!

Thanks!
Kyle


 On Mon, Jul 14, 2014 at 3:24 PM, Miguel Angel Ajo Pelayo
 mangel...@redhat.com wrote:

 The oslo-rootwrap spec counterpart of this
 spec has been approved:

 https://review.openstack.org/#/c/94613/

 Cheers :-)

 - Original Message -
  Yuriy, thanks for your spec and code! I'll sync with Carl tomorrow on
  this
  and see how we can proceed for Juno around this.
 
 
  On Sat, Jul 12, 2014 at 10:00 AM, Carl Baldwin  c...@ecbaldwin.net 
  wrote:
 
 
 
 
  +1 This spec had already been proposed quite some time ago. I'd like to
  see
  this work get in to juno.
 
  Carl
  On Jul 12, 2014 9:53 AM, Yuriy Taraday  yorik@gmail.com  wrote:
 
 
 
  Hello, Kyle.
 
  On Fri, Jul 11, 2014 at 6:18 PM, Kyle Mestery 
  mest...@noironetworks.com 
  wrote:
 
 
  Just a note that yesterday we passed SPD for Neutron. We have a
  healthy backlog of specs, and I'm working to go through this list and
  make some final approvals for Juno-3 over the next week. If you've
  submitted a spec which is in review, please hang tight while myself
  and the rest of the neutron cores review these. It's likely a good
  portion of the proposed specs may end up as deferred until K
  release, given where we're at in the Juno cycle now.
 
  Thanks!
  Kyle
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  Please don't skip my spec on rootwrap daemon support:
  https://review.openstack.org/#/c/93889/
  It got -2'd by Mark McClain when my spec in oslo wasn't approved, but now
  that's fixed; however, it's not easy to get hold of Mark.
  Code for that spec (also -2'd by Mark) is close to being finished and
  requires
  some discussion to get merged by Juno-3.
 
  --
 
  Kind regards, Yuriy.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Kind regards, Yuriy.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Spec Approval Deadline (SAD) has passed, next steps

2014-07-21 Thread Kyle Mestery
Hi all!

A quick note that SAD has passed. We briskly approved a pile of BPs
over the weekend, most of them vendor related as low priority, best
effort attempts for Juno-3. At this point, we're hugely oversubscribed
for Juno-3, so it's unlikely we'll make exceptions for things into
Juno-3 now.

I don't plan to open a Kilo directory in the specs repository quite
yet. I'd like to first let things settle down a bit with Juno-3 before
going there. Once I do, specs which were not approved should be moved
to that directory where they can be reviewed with the idea they are
targeting Kilo instead of Juno.

Also, just a note that we have a handful of bugs and BPs we're trying
to land in Juno-3 yet today, so core reviewers, please focus on those
today.

Thanks!
Kyle

[1] https://launchpad.net/neutron/+milestone/juno-2

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone/swift] role-based access control in swift

2014-07-21 Thread Nassim Babaci
Hi, 

My answer may be a little bit late, but here's a Swift middleware we have 
just published: https://github.com/cloudwatt/swiftpolicy 
It allows managing Swift authorization using a policy.json file. 
It is based on the keystoneauth middleware, and uses the oslo.policy file format.
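
(For illustration, an oslo.policy-style file restricting container modifications
to admins might look like the following - the rule names here are invented for
the example; see the project's README for the actual ones:)

    {
        "get_container": "",
        "put_container": "role:admin",
        "post_container": "role:admin",
        "delete_container": "role:admin",
        "put_object": "role:admin or role:swiftoperator"
    }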

Feel free to comment and/or ask if you have any questions.

--
Nassim

- Original Message -
From: John Dickinson m...@not.mn
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Friday, 11 July 2014 05:33:13
Subject: Re: [openstack-dev] [keystone/swift] role-based access control in swift

There are a couple of places to look to see the current dev effort in Swift 
around ACLs.

In no particular order:

* Supporting a service token in Swift https://review.openstack.org/#/c/105228/
* Adding policy engine support to Swift https://review.openstack.org/#/c/89568/
* Fixing ACLs to work with Keystone v3+ https://review.openstack.org/#/c/86430/

Some of the above may be in line with what you're looking for.

--John

On Jul 10, 2014, at 8:17 PM, Osanai, Hisashi osanai.hisa...@jp.fujitsu.com 
wrote:

 
 Hi, 
 
 I looked for info about role-based access control in swift because 
 I would like to prohibit PUT operations to containers like create 
 containers and set ACLs.
 
 Other services like Nova, Cinder have policy.json file but Swift doesn't.
 And I found out the following info.
 - Swift ACL's migration
 - Centralized policy management
 
 Do you have detail info for above?
 
 http://dolphm.com/openstack-juno-design-summit-outcomes-for-keystone/
 ---
 Migrate Swift ACL's from a highly flexible Tenant ID/Name basis, which worked 
 reasonably well against Identity API v2, to strictly be based on v3 Project 
 IDs. The driving requirement here is that Project Names are no longer 
 globally unique in v3, as they're only unique within a top-level domain.
 ---
 Centralized policy management
 Keystone currently provides an unused /v3/policies API that can be used to 
 centralize policy blob management across OpenStack.
 
 
 Best Regards,
 Hisashi Osanai
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Specs repository update and the way forward

2014-07-21 Thread Carlos Gonçalves
On 12 Jun 2014, at 15:00, Carlos Gonçalves m...@cgoncalves.pt wrote:

 Is there any web page where all approved blueprints are published? 
 Jenkins builds the kind of pages I’m looking for, but they are linked to each 
 patchset individually (e.g., 
 http://docs-draft.openstack.org/77/92477/6/check/gate-neutron-specs-docs/f05cc1d/doc/build/html/).
 In addition, listing BPs currently under review and linking to their 
 review.o.o pages could potentially draw more attention/awareness to what’s 
 being proposed for Neutron (and other OpenStack projects).

Kyle? :-)

Thanks,
Carlos Goncalves
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Specs repository update and the way forward

2014-07-21 Thread Kyle Mestery
On Mon, Jul 21, 2014 at 8:33 AM, Carlos Gonçalves m...@cgoncalves.pt wrote:
 On 12 Jun 2014, at 15:00, Carlos Gonçalves m...@cgoncalves.pt wrote:

 Is there any web page where all approved blueprints are published?
 Jenkins builds the kind of pages I’m looking for, but they are linked to each
 patchset individually (e.g.,
 http://docs-draft.openstack.org/77/92477/6/check/gate-neutron-specs-docs/f05cc1d/doc/build/html/).
 In addition, listing BPs currently under review and linking to their
 review.o.o pages could potentially draw more attention/awareness to what’s
 being proposed for Neutron (and other OpenStack projects).


 Kyle? :-)

I don't know of a published page, but you can always look at the git
repository [1].

[1] http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno

 Thanks,
 Carlos Goncalves

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting reminder - 07/21/2014

2014-07-21 Thread Renat Akhmerov
Hi,

Keep in mind that we’ll have a team meeting today at #openstack-meeting at 
16.00 UTC.

Agenda:
Review action items
Current status (quickly by team members)
Further plans
Open discussion

You can also find it at https://wiki.openstack.org/wiki/Meetings/MistralAgenda 
as well as the links to the previous meetings.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-21 Thread Steve Gordon
- Original Message -
 From: Brent Eagles beag...@redhat.com
 To: openstack-dev@lists.openstack.org
 
 Hi,
 
 A bug titled Creating quantum L2 networks (without subnets) doesn't
 work as expected (https://bugs.launchpad.net/nova/+bug/1039665) was
 reported quite some time ago. Beyond the discussion in the bug report,
 there have been related bugs reported a few times.
 
 * https://bugs.launchpad.net/nova/+bug/1304409
 * https://bugs.launchpad.net/nova/+bug/1252410
 * https://bugs.launchpad.net/nova/+bug/1237711
 * https://bugs.launchpad.net/nova/+bug/1311731
 * https://bugs.launchpad.net/nova/+bug/1043827
 
 BZs on this subject seem to have a hard time surviving. They get marked
 as incomplete or invalid, or, in the related issues, the problem NOT
 related to the feature is addressed and the bug closed. We seem to dance
 around actually getting around to implementing this. The multiple
 reports show there *is* interest in this functionality but at the moment
 we are without an actual implementation.
 
 At the moment there are multiple related blueprints:

Following up with post SAD status:

 * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
   extension support

Remains unapproved, no negative feedback on current revision.

 * https://review.openstack.org/#/c/106222/ Add Port Security
   Implementation in ML2 Plugin

Has a -2 to highlight the significant overlap with 99873 above.

 * https://review.openstack.org/#/c/97715 NFV unaddressed interfaces

Remains unapproved, no negative feedback on current revision. 

Although there were some discussions about these last week I am not sure we 
reached consensus on whether either of these (or even both of them) are the 
correct path forward - particularly to address the problem Brent raised w.r.t. 
to creation of networks without subnets - I believe this currently still works 
with nova-network?

Regardless, I am wondering if either of the spec authors intend to propose 
these for a spec freeze exception?

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-21 Thread Doug Hellmann


On Sun, Jul 20, 2014, at 04:35 PM, Jay S. Bryant wrote:
 Thanks Duncan and also Dolph, I should have made the question
 broader.  :-)
 
 On Wed, 2014-07-16 at 13:22 +0100, Duncan Thomas wrote:
  On 16 July 2014 03:57, Jay S. Bryant jsbry...@electronicjungle.net wrote:
   John,
  
   So you have said a few times that the specs are a learning process.
   What do you feel with have learned thus far using specs?
  
  I'm not John, but I'm going to answer as if you'd addressed the question 
  wider:
  - Specs can definitely help flesh out ideas and are much better than
  blueprints as a way of tracking concerns, questions, etc
  
 
 I feel I have better knowledge of what is being worked thanks to the
 specs.  This may partially be because I was also involved from the
 summit on for the first time.  They definitely are better for fleshing
 out ideas and discussing concerns.
 
  - We as a community are rather shy about making decisions as
  individuals, even low risk ones like 'Does this seem to require a
  spec' - if there doesn't seem to be value in a spec, don't do one
  unless somebody asks for one
 
 Agreed.  I think we all need to be less shy about making decisions and
 voicing them.  At least in Cinder.  :-)
 
  
  - Not all questions can be answered at spec time, sometimes you need
  to go bash out some code to see what works, then circle again
  
  - Careful planning reduces velocity. No significant evidence either
  way as to whether it improves quality, but my gut feeling is that it
  does. We need to figure out what tradeoffs on either scale we're happy
  to make, and perhaps that answer is different based on the area of
  code being touched and the date (e.g. a change that doesn't affect
  external APIs in J-1 might need less careful planning than a change in
  J-3. API changes or additions need more discussion and eyes on than
  non-API changes)
 
 I think, through this development cycle we are starting to narrow down
 what really needs a spec.  I think it would be good to perhaps have a
 Lessons Learned session at the K summit on the specs and try to better
 define expectations for use in the future.  I feel it has slowed, or at
 least focused development.  That has been good.

We should make that a cross-project session, so we can learn from what
the other projects did, too.

Doug

 
  
  - Specs are terrible for tracking work items, but no worse than blueprints
  
 Agreed.
  - Multiple people might choose to work on the same blueprint in
  parallel - this is going to happen, isn't necessarily rude and the
  correct solution to competing patches is entirely subjective
 
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Add a hacking check to not use Python Source Code Encodings (PEP0263)

2014-07-21 Thread Doug Hellmann

On Jul 21, 2014, at 4:45 AM, Christian Berendt bere...@b1-systems.de wrote:

 Hello.
 
 There are some files using a Python source code encoding declaration on the first
 line. That's normally not necessary, and I want to propose introducing a
 hacking check that verifies the absence of such source code encoding declarations.

We need to be testing with Unicode inputs. Will the hacking check still support 
that case?
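
(For reference, the declaration in question is the PEP 263 coding cookie, which is
only recognized on the first two lines of a file. A rough sketch of the kind of
check being proposed follows; the code number and function name are made up, and
this is not the actual hacking plugin API. Because it only inspects those two
physical lines, Unicode string literals and Unicode test data elsewhere in a file
would not be affected.)

    import re

    CODING_COOKIE = re.compile(r'^#.*coding[:=]\s*[-\w.]+')

    def check_no_source_encoding(physical_line, line_number):
        # PEP 263 only recognizes a coding cookie on line 1 or 2.
        if line_number <= 2 and CODING_COOKIE.match(physical_line):
            return 0, "X999: remove the source code encoding declaration"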

Doug

 
 Best, Christian.
 
 -- 
 Christian Berendt
 Cloud Computing Solution Architect
 Mail: bere...@b1-systems.de
 
 B1 Systems GmbH
 Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
 GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] The gate: a failure analysis

2014-07-21 Thread Chris Friesen

On 07/21/2014 04:38 AM, Matthew Booth wrote:


I would like to make the radical proposal that we stop gating on CI
failures. We will continue to run them on every change, but only after
the change has been successfully merged.

Benefits:
* Without rechecks, the gate will use 8 times fewer resources.
* Log analysis is still available to indicate the emergence of races.
* Fixes can be merged quicker.
* Vastly less developer time spent monitoring gate failures.

Costs:
* A rare class of merge bug will make it into master.

Note that the benefits above will also offset the cost of resolving this
rare class of merge bug.

Of course, we still have the problem of finding resources to monitor and
fix CI failures. An additional benefit of not gating on CI will be that
we can no longer pretend that picking developers for project-affecting
bugs by lottery is likely to achieve results. As a project we need to
understand the importance of CI failures. We need a proper negotiation
with contributors to staff a team dedicated to the problem. We can then
use the review process to ensure that the right people have an incentive
to prioritise bug fixes.


I'm generally in favour of this idea...I've only submitted a relatively 
small number of changes, but each time has involved gate bugs unrelated 
to the change being made.


Would there be value in doing unit tests at the time of submission?  We 
should all be doing this already, but it seems like it shouldn't be too 
expensive and might be reasonable insurance.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

2014-07-21 Thread Mooney, Sean K
Hi
I would like to propose  
https://review.openstack.org/#/c/107797/1/specs/juno/ml2-use-dpdkvhost.rst for 
a spec freeze exception.

https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost

This blueprint adds support for the Intel(R) DPDK Userspace vHost
port binding to the Open Vswitch and Open Daylight ML2 Mechanism Drivers.

This blueprint enables nova changes tracked by the following spec:
https://review.openstack.org/#/c/95805/1/specs/juno/libvirt-ovs-use-usvhost.rst

regards
sean
--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] MyISAM as a default storage engine for MySQL in the gate

2014-07-21 Thread Clark Boylan
On Jul 21, 2014 8:28 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:

 Hi all,

 To my surprise I found that we default to using MyISAM in the gate
 [1], while InnoDB would be a much more suitable choice, which people
 use in production deployments (== we should test it in the gate). This
 means, that every table, for which we haven't explicitly specified to
 use InnoDB, will be created using MyISAM engine, which is clearly not
 what we want (and we have migration scripts at least in Neutron which
 don't specify InnoDB explicitly and rely on MySQL configuration
 value).

 Is there any specific reason we default to MyISAM? Or I should submit
 a patch changing the default storage engine to be InnoDB?

We want projects to force the use of innodb over myisam. To test this the
gate defaults to myisam and should check that innodb is used instead by the
projects. So this is very intentional.

Are we missing those checks in places?

 Thanks,
 Roman

 [1]
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/manifests/slave_db.pp#L12

Clark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] MyISAM as a default storage engine for MySQL in the gate

2014-07-21 Thread Roman Podoliaka
Hi all,

To my surprise I found that we default to using MyISAM in the gate
[1], while InnoDB would be a much more suitable choice, since it is what
people use in production deployments (== we should test it in the gate). This
means that every table for which we haven't explicitly specified
InnoDB will be created with the MyISAM engine, which is clearly not
what we want (and we have migration scripts, at least in Neutron, which
don't specify InnoDB explicitly and rely on the MySQL configuration
value).

Is there any specific reason we default to MyISAM? Or I should submit
a patch changing the default storage engine to be InnoDB?
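
For anyone touching migrations in the meantime, the usual way to avoid depending on
the server default is to request the engine explicitly. A minimal sqlalchemy-migrate
style sketch (table and column names are made up):

    import sqlalchemy as sa

    def upgrade(migrate_engine):
        meta = sa.MetaData()
        # mysql_engine/mysql_charset are passed through to CREATE TABLE, so the
        # table ends up as InnoDB even if default-storage-engine is MyISAM.
        example = sa.Table(
            'example_table', meta,
            sa.Column('id', sa.Integer, primary_key=True),
            sa.Column('name', sa.String(255)),
            mysql_engine='InnoDB',
            mysql_charset='utf8')
        example.create(bind=migrate_engine)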

Thanks,
Roman

[1] 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/manifests/slave_db.pp#L12

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Nova][Scheduler] Prompt select_destination as a REST API

2014-07-21 Thread Jay Lau
Now in OpenStack Nova, select_destination is used by
create/rebuild/migrate/evacuate VM when selecting target host for those
operations.

There is one requirement that some customers want to get the possible host
list when create/rebuild/migrate/evacuate VM so as to create a resource
plan for those operations, but currently select_destination is not a REST
API, is it possible that we prompt this API to be a REST API?

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo.serialization and oslo.concurrency graduation call for help

2014-07-21 Thread Ben Nemec
Hi all,

The oslo.serialization and oslo.concurrency graduation specs are both
approved, but unfortunately I haven't made as much progress on them as I
would like.  The serialization repo has been created and has enough acks
to continue the process, and concurrency still needs to be started.

Also unfortunately, I am unlikely to make progress on either over the
next two weeks due to the tripleo meetup and vacation.  As discussed in
the Oslo meeting last week
(http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-18-16.00.log.html)
we would like to continue work on them during that time, so Doug asked
me to look for volunteers to pick up the work and run with it.

The current status and next steps for oslo.serialization can be found in
the bp:
https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-serialization

As mentioned, oslo.concurrency isn't started and has a few more pending
tasks, which are enumerated in the spec:
http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/juno/graduate-oslo-concurrency.rst

Any help would be appreciated.  I'm happy to pick this back up in a
couple of weeks, but if someone could shepherd it along in the meantime
that would be great!

Thanks.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-21 Thread Jay Lau
Sorry, correct one typo. I mean Promote select_destination as a REST API


2014-07-21 23:49 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Now in OpenStack Nova, select_destination is used by
 create/rebuild/migrate/evacuate VM when selecting target host for those
 operations.

 There is one requirement that some customers want to get the possible host
 list when create/rebuild/migrate/evacuate VM so as to create a resource
 plan for those operations, but currently select_destination is not a REST
 API, is it possible that we promote this API to be a REST API?

 --
 Thanks,

 Jay




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

2014-07-21 Thread Jay S. Bryant
Johnson,

I am not sure what you mean by 'attach volume manually'.  Do you mean
when you do a 'nova volume-attach'?  If so, then, yes, the process will
use the appropriate iscsi_helper and iscsi_ip_address to configure the
attachment.

Does that answer your question?

Jay

On Mon, 2014-07-21 at 12:23 +, Johnson Cheng wrote:
 Dear Thomas,
 
 Thanks for your reply.
 So when I attach the volume manually, will the iSCSI LUN be set up automatically based on 
 cinder.conf (iscsi_helper and iscsi_ip_address)?
 
 
 Regards,
 Johnson
 
 -Original Message-
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
 Sent: Monday, July 21, 2014 6:16 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question
 
 The iSCSI lun won't be set up until you try to attach the volume
 
 On 17 July 2014 12:44, Johnson Cheng johnson.ch...@qsantechnology.com wrote:
  Dear All,
 
 
 
  I installed iSCSI target at my controller node (IP: 192.168.106.20),
 
  #iscsitarget open-iscsi iscsitarget-dkms
 
 
 
  then modify my cinder.conf at controller node as below,
 
  [DEFAULT]
  rootwrap_config = /etc/cinder/rootwrap.conf
  api_paste_confg = /etc/cinder/api-paste.ini
  #iscsi_helper = tgtadm
  iscsi_helper = ietadm
  volume_name_template = volume-%s
  volume_group = cinder-volumes
  verbose = True
  auth_strategy = keystone
  #state_path = /var/lib/cinder
  #lock_path = /var/lock/cinder
  #volumes_dir = /var/lib/cinder/volumes
  iscsi_ip_address=192.168.106.20
 
  rpc_backend = cinder.openstack.common.rpc.impl_kombu
  rabbit_host = controller
  rabbit_port = 5672
  rabbit_userid = guest
  rabbit_password = demo
 
  glance_host = controller
 
  enabled_backends=lvmdriver-1,lvmdriver-2
 
  [lvmdriver-1]
  volume_group=cinder-volumes-1
  volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
  volume_backend_name=LVM_iSCSI
 
  [lvmdriver-2]
  volume_group=cinder-volumes-2
  volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
  volume_backend_name=LVM_iSCSI_b
 
  [database]
  connection = mysql://cinder:demo@controller/cinder
 
  [keystone_authtoken]
  auth_uri = http://controller:5000
  auth_host = controller
  auth_port = 35357
  auth_protocol = http
  admin_tenant_name = service
  admin_user = cinder
  admin_password = demo
 
 
 
  Now I use the following command to create a cinder volume, and it can 
  be created successfully.
 
  # cinder create --volume-type lvm_controller --display-name vol 1
 
 
 
  Unfortunately it seems it is not attached to an iSCSI LUN automatically, because 
  I cannot discover it from the iSCSI initiator,
 
  # iscsiadm -m discovery -t st -p 192.168.106.20
 
 
 
  Do I miss something?
 
 
 
 
 
  Regards,
 
  Johnson
 
 
 
 
 
  From: Manickam, Kanagaraj [mailto:kanagaraj.manic...@hp.com]
  Sent: Thursday, July 17, 2014 1:19 PM
 
 
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target 
  Question
 
 
 
  I think, It should be on the cinder node which is usually deployed on 
  the controller node
 
 
 
  From: Johnson Cheng [mailto:johnson.ch...@qsantechnology.com]
  Sent: Thursday, July 17, 2014 10:38 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Cinder] Integrated with iSCSI target 
  Question
 
 
 
  Dear All,
 
 
 
  I have three nodes, a controller node and two compute nodes(volume node).
 
  The default value for iscsi_helper in cinder.conf is “tgtadm”, I will 
  change to “ietadm” to integrate with iSCSI target.
 
  Unfortunately I am not sure whether iscsitarget should be installed on the 
  controller node or on a compute node.
 
  Have any reference?
 
 
 
 
 
  Regards,
 
  Johnson
 
 
 
 
 
 
 
 
 --
 Duncan Thomas
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] MyISAM as a default storage engine for MySQL in the gate

2014-07-21 Thread Roman Podoliaka
Aha, makes sense. Yeah, this means we miss such a check at least in
Neutron and should add one to the test suite. Thanks!
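
Something along these lines is probably enough; this is only a sketch (not Neutron's
actual test), and the connection URL is a placeholder:

    import sqlalchemy as sa

    MYSQL_URL = "mysql://user:password@localhost/neutron"  # placeholder

    def test_all_tables_use_innodb():
        engine = sa.create_engine(MYSQL_URL)
        with engine.connect() as conn:
            rows = conn.execute(sa.text(
                "SELECT table_name, engine FROM information_schema.tables "
                "WHERE table_schema = DATABASE() "
                "AND engine != 'InnoDB' "
                "AND table_name != 'alembic_version'"))
            offenders = [tuple(row) for row in rows]
        assert not offenders, "non-InnoDB tables found: %s" % offenders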

On Mon, Jul 21, 2014 at 6:34 PM, Clark Boylan clark.boy...@gmail.com wrote:

 On Jul 21, 2014 8:28 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:

 Hi all,

 To my surprise I found that we default to using MyISAM in the gate
 [1], while InnoDB would be a much more suitable choice, which people
 use in production deployments (== we should test it in the gate). This
 means, that every table, for which we haven't explicitly specified to
 use InnoDB, will be created using MyISAM engine, which is clearly not
 what we want (and we have migration scripts at least in Neutron which
 don't specify InnoDB explicitly and rely on MySQL configuration
 value).

 Is there any specific reason we default to MyISAM? Or I should submit
 a patch changing the default storage engine to be InnoDB?

 We want projects to force the use of innodb over myisam. To test this the
 gate defaults to myisam and should check that innodb is used instead by the
 projects. So this is very intentional.

 Are we missing those checks in places?



 Thanks,
 Roman

 [1]
 https://github.com/openstack-infra/config/blob/master/modules/openstack_project/manifests/slave_db.pp#L12

 Clark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec freeze exception] Online Schema Changes

2014-07-21 Thread Kevin L. Mitchell
On Mon, 2014-07-21 at 10:55 +0100, John Garbutt wrote:
 On 19 July 2014 03:53, Johannes Erdfelt johan...@erdfelt.com wrote:
  I'm requestion a spec freeze exception for online schema changes.
 
  https://review.openstack.org/102545
 
  This work is being done to try to minimize the downtime as part of
  upgrades. Database migrations have historically been a source of long
  periods of downtime. The spec is an attempt to start optimizing this
  part by allowing deployers to perform most schema changes online, while
  Nova is running.
 
 Improving upgrades is high priority, and I feel it will help reduce
 the amount of downtime required when performing database migrations.
 
 So I am happy to sponsor this.

I will also sponsor this for an exception.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Meeting time change

2014-07-21 Thread Kurt Griffiths
I think Wednesday would be best. That way we can get an update on all the
bugs and blueprints before the weekly 1:1 project status meetings with
Thierry on Thursday. Mondays are often pretty busy with everyone having
meetings and catchup from the weekend.

If we do 2100 UTC, that is 9am NZT. Shall we alternate between 1900 and
2100 UTC on Wednesdays?
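
(Double-checking that conversion with a quick sketch, assuming pytz is available;
July 23rd is used as an example Wednesday:)

    from datetime import datetime
    import pytz

    slot = pytz.utc.localize(datetime(2014, 7, 23, 21, 0))  # Wed 2100 UTC
    print(slot.astimezone(pytz.timezone("Pacific/Auckland")))
    # 2014-07-24 09:00:00+12:00, i.e. 9am Thursday morning in New Zealand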

Also, when will we meet this week? Perhaps we should keep things the same
one more time while we get the new schedule finalized here on the ML.

On 7/17/14, 11:27 AM, Flavio Percoco fla...@redhat.com wrote:

On 07/16/2014 06:31 PM, Malini Kamalambal wrote:
 
 On 7/16/14 4:43 AM, Flavio Percoco fla...@redhat.com wrote:
 
 On 07/15/2014 06:20 PM, Kurt Griffiths wrote:
 Hi folks, we've been talking about this in IRC, but I wanted to bring it
 to the ML to get broader feedback and make sure everyone is aware. We'd
 like to change our meeting time to better accommodate folks that live
 around the globe. Proposals:

 Tuesdays, 1900 UTC
 Wednesdays, 2000 UTC
 Wednesdays, 2100 UTC

 I believe these time slots are free, based
 on: https://wiki.openstack.org/wiki/Meetings

 Please respond with ONE of the following:

 A. None of these times work for me
 B. An ordered list of the above times, by preference
 C. I am a robot

 I don't like the idea of switching days :/

 Since the reason we're using Wednesday is because we don't want the
 meeting to overlap with the TC and projects meeting, what if we change
 the day of both meeting times in order to keep them on the same day (and
 perhaps also channel) but on different times?

 I think changing day and time will be more confusing than just changing
 the time.
 
 If we can find an agreeable time on a non-Tuesday, I'll take ownership of
 pinging and getting you to #openstack-meeting-alt ;)
 
From a quick look, #openstack-meeting-alt is free on Wednesdays at both
 times: 15 UTC and 21 UTC. Does this sound like a good day/time/idea to
 folks?
 
 1500 UTC might still be too early for our NZ folks - I thought we wanted
 to have the meeting at/after 1900 UTC.
 That being said, I will be able to attend only part of the meeting any
 time after 1900 UTC - unless it is @ Thursday 1900 UTC
 Sorry for making this a puzzle :(

We'll have 2 times. The idea is to keep the current time and have a
second time slot that is good for NZ folks. What I'm proposing is to
pick a day in the week that is good for both times and just rotate on
the time instead of time+day_of_the_week.

Again, the proposal is not to have 1 time but just 1 day and alternate
times on that day. For example, Glance meetings are *always* on
Thursdays and time is alternated each other week. We can do the same for
Marconi on Mondays, Wednesdays or Fridays.

Thoughts?


Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Approval Deadline (SAD) has passed, next steps

2014-07-21 Thread YAMAMOTO Takashi
 Hi all!
 
 A quick note that SAD has passed. We briskly approved a pile of BPs

it's sad. ;-(

 over the weekend, most of them vendor related as low priority, best
 effort attempts for Juno-3. At this point, we're hugely oversubscribed
 for Juno-3, so it's unlikely we'll make exceptions for things into
 Juno-3 now.

my specs were ok'ed by Kyle but failed to get a second core reviewer.
https://review.openstack.org/#/c/98702/
https://review.openstack.org/#/c/103737/

does this indicate a core-reviewer man-power problem?
if so, can you consider increasing the number of core reviewers?
postponing vendor work (like mine) for that reason would make
the situation worse, as many developers/reviewers are paid to work on
vendor features.

YAMAMOTO Takashi

 
 I don't plan to open a Kilo directory in the specs repository quite
 yet. I'd like to first let things settle down a bit with Juno-3 before
 going there. Once I do, specs which were not approved should be moved
 to that directory where they can be reviewed with the idea they are
 targeting Kilo instead of Juno.
 
 Also, just a note that we have a handful of bugs and BPs we're trying
 to land in Juno-3 yet today, so core reviewers, please focus on those
 today.
 
 Thanks!
 Kyle
 
 [1] https://launchpad.net/neutron/+milestone/juno-2
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-21 Thread Chris Friesen

On 07/21/2014 09:52 AM, Jay Lau wrote:

Sorry, correct one typo. I mean Promote select_destination as a REST API


2014-07-21 23:49 GMT+08:00 Jay Lau jay.lau@gmail.com
mailto:jay.lau@gmail.com:

Now in OpenStack Nova, select_destination is used by
create/rebuild/migrate/evacuate VM when selecting target host for
those operations.

There is one requirement that some customers want to get the
possible host list when create/rebuild/migrate/evacuate VM so as to
create a resource plan for those operations, but currently
select_destination is not a REST API, is it possible that we promote
this API to be a REST API?


How would that work, given that when they go to actually perform the 
operation the conditions may have changed and the selected destination 
may be different?


Or is the idea that they would do a select_destination call, and then 
call the create/rebuild/migrate/evacuate while specifying the selected 
destination?


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-21 Thread Sylvain Bauza
Le 21/07/2014 17:52, Jay Lau a écrit :
 Sorry, correct one typo. I mean Promote select_destination as a REST API



-1 to it. During the last summit, we agreed on externalizing the current
Scheduler code into a separate project called Gantt. For that, we agreed
on first doing the necessary changes within the Scheduler before creating
a new repository.

Providing select_destinations as a new API endpoint would be a disruptive
change, since the Scheduler would gain a new entrypoint.

As this change would need a spec anyway, and as there is a Spec Freeze
now for Juno, I propose to delay this proposal until Gantt is created
and to propose a REST API for Gantt instead (in Kilo or L).

-Sylvain


 2014-07-21 23:49 GMT+08:00 Jay Lau jay.lau@gmail.com
 mailto:jay.lau@gmail.com:

 Now in OpenStack Nova, select_destination is used by
 create/rebuild/migrate/evacuate VM when selecting target host for
 those operations.

 There is one requirement that some customers want to get the
 possible host list when create/rebuild/migrate/evacuate VM so as
 to create a resource plan for those operations, but currently
 select_destination is not a REST API, is it possible that we
 promote this API to be a REST API?

 -- 
 Thanks,

 Jay




 -- 
 Thanks,

 Jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] MyISAM as a default storage engine for MySQL in the gate

2014-07-21 Thread Mike Bayer
OK, so, we aren’t generally running neutron tests w/ MySQL + InnoDB, right?
I happen to be running them locally against a MySQL that defaults to InnoDB.  
And I’m trying to see if it’s deadlocking or not as I’m not able to get through 
them.   All the eventlet + MySQLdb deadlock issues won’t be apparent with 
MyISAM.   




On Jul 21, 2014, at 11:55 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:

 Aha, makes sense. Yeah, this means we miss such a check at least in
 Neutron and should add one to the test suite. Thanks!
 
 On Mon, Jul 21, 2014 at 6:34 PM, Clark Boylan clark.boy...@gmail.com wrote:
 
 On Jul 21, 2014 8:28 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:
 
 Hi all,
 
 To my surprise I found that we default to using MyISAM in the gate
 [1], while InnoDB would be a much more suitable choice, which people
 use in production deployments (== we should test it in the gate). This
 means, that every table, for which we haven't explicitly specified to
 use InnoDB, will be created using MyISAM engine, which is clearly not
 what we want (and we have migration scripts at least in Neutron which
 don't specify InnoDB explicitly and rely on MySQL configuration
 value).
 
 Is there any specific reason we default to MyISAM? Or I should submit
 a patch changing the default storage engine to be InnoDB?
 
 We want projects to force the use of innodb over myisam. To test this the
 gate defaults to myisam and should check that innodb is used instead by the
 projects. So this is very intentional.
 
 Are we missing those checks in places?
 
 
 
 Thanks,
 Roman
 
 [1]
 https://github.com/openstack-infra/config/blob/master/modules/openstack_project/manifests/slave_db.pp#L12
 
 Clark
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance]code review needed for 'changing HTTP response code on errors'

2014-07-21 Thread Wang, Kent
Hi, I'm looking for some reviewers (especially core reviewers!) to review my 
patch that fixes this bug.

This is the bug description:

Glance v2: HTTP 404s are returned for unallowed methods
Requests for many resources in Glance v2 will return a 404 if the request is 
using an unsupported HTTP verb for that resource. For example, the /v2/images 
resource does exist but a 404 is returned when attempting a DELETE on that 
resource. Instead, this should return an HTTP 405 MethodNotAllowed response.
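
As a quick illustration of the expected behaviour (a hypothetical check, not part of
the patch; the endpoint and token below are placeholders):

    import requests

    GLANCE = "http://localhost:9292"            # placeholder endpoint
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}   # placeholder token

    # /v2/images exists, but DELETE is not a supported verb on the collection.
    resp = requests.delete(GLANCE + "/v2/images", headers=HEADERS)
    print(resp.status_code)  # today this is 404; with the fix it should be 405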


My fix for it can be found here:
https://review.openstack.org/#/c/103959/

Thanks!
Kent
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting minutes/log - 07/21/2014

2014-07-21 Thread Renat Akhmerov
Thanks for joining the meeting today!

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-07-21-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-07-21-16.00.log.html

The next meeting will be held on July 28th.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Weekly networking meeting today

2014-07-21 Thread Kyle Mestery
I'd like to have a short meeting today, say 30 minutes. I'd like to
focus on the final Juno-2 BPs which have code out for review, and also
briefly touch on SAD, exceptions, etc. We'll still meet at the same
time [1], but this will be a short meeting.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] The gate: a failure analysis

2014-07-21 Thread Samuel Merritt

On 7/21/14, 3:38 AM, Matthew Booth wrote:

[snip]

I would like to make the radical proposal that we stop gating on CI
failures. We will continue to run them on every change, but only after
the change has been successfully merged.

Benefits:
* Without rechecks, the gate will use 8 times fewer resources.
* Log analysis is still available to indicate the emergence of races.
* Fixes can be merged quicker.
* Vastly less developer time spent monitoring gate failures.

Costs:
* A rare class of merge bug will make it into master.

Note that the benefits above will also offset the cost of resolving this
rare class of merge bug.


I think this is definitely a move in the right direction, but I'd like 
to propose a slight modification: let's cease blocking changes on 
*known* CI failures.


More precisely, if Elastic Recheck knows about all the failures that 
happened on a test run, treat that test run as successful.
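
In pseudo-code the rule would be roughly the following; classify_failure stands in
for whatever elastic-recheck exposes for matching a failure against its known bug
signatures (the names here are hypothetical):

    def run_is_acceptable(failures, classify_failure):
        # Treat the gate run as passing only if every failure maps to a known,
        # already-tracked bug; any unrecognized failure still blocks the change.
        for failure in failures:
            if classify_failure(failure) is None:
                return False
        return True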


I think this will gain virtually all the benefits you name while still 
retaining most of the gate's ability to keep breaking changes out.


As a bonus, it'll encourage people to make Elastic Recheck better. 
Currently, the easy path is to just type recheck no bug and click 
submit; it takes a lot less time than scrutinizing log files to guess 
at what went wrong. If failures identified by E-R don't block 
developers' changes, then the easy path is to improve E-R's checks, 
which benefits everyone.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] The gate: a failure analysis

2014-07-21 Thread Clint Byrum
Thanks Matthew for the analysis.

I think you missed something though.

Right now the frustration is that unrelated intermittent bugs stop your
presumably good change from getting in.

Without gating, the result would be that even more bugs, many of them not
intermittent at all, would get in. Right now, the one random developer
who has to hunt down the rechecks and do them is inconvenienced. But
without a gate, _every single_ developer will be inconvenienced until
the fix is merged.

The false negative rate is _way_ too high. Nobody would disagree there.
However, adding more false negatives and allowing more people to ignore
the ones we already have, seems like it would have the opposite effect:
Now instead of annoying the people who hit the random intermittent bugs,
we'll be annoying _everybody_ as they hit the non-intermittent ones.

Excerpts from Matthew Booth's message of 2014-07-21 03:38:07 -0700:
 On Friday evening I had a dependent series of 5 changes all with
 approval waiting to be merged. These were all refactor changes in the
 VMware driver. The changes were:
 
 * VMware: DatastorePath join() and __eq__()
 https://review.openstack.org/#/c/103949/
 
 * VMware: use datastore classes get_allowed_datastores/_sub_folder
 https://review.openstack.org/#/c/103950/
 
 * VMware: use datastore classes in file_move/delete/exists, mkdir
 https://review.openstack.org/#/c/103951/
 
 * VMware: Trivial indentation cleanups in vmops
 https://review.openstack.org/#/c/104149/
 
 * VMware: Convert vmops to use instance as an object
 https://review.openstack.org/#/c/104144/
 
 The last change merged this morning.
 
 In order to merge these changes, over the weekend I manually submitted:
 
 * 35 rechecks due to false negatives, an average of 7 per change
 * 19 resubmissions after a change passed, but its dependency did not
 
 Other interesting numbers:
 
 * 16 unique bugs
 * An 87% false negative rate
 * 0 bugs found in the change under test
 
 Because we don't fail fast, that is an average of at least 7.3 hours in
 the gate. Much more in fact, because some runs fail on the second pass,
 not the first. Because we don't resubmit automatically, that is only if
 a developer is actively monitoring the process continuously, and
 resubmits immediately on failure. In practise this is much longer,
 because sometimes we have to sleep.
 
 All of the above numbers are counted from the change receiving an
 approval +2 until final merging. There were far more failures than this
 during the approval process.
 
 Why do we test individual changes in the gate? The purpose is to find
 errors *in the change under test*. By the above numbers, it has failed
 to achieve this at least 16 times previously.
 
 Probability of finding a bug in the change under test: Small
 Cost of testing:   High
 Opportunity cost of slowing development:   High
 
 and for comparison:
 
 Cost of reverting rare false positives:Small
 
 The current process expends a lot of resources, and does not achieve its
 goal of finding bugs *in the changes under test*. In addition to using a
 lot of technical resources, it also prevents good change from making its
 way into the project and, not unimportantly, saps the will to live of
 its victims. The cost of the process is overwhelmingly greater than its
 benefits. The gate process as it stands is a significant net negative to
 the project.
 
 Does this mean that it is worthless to run these tests? Absolutely not!
 These tests are vital to highlight a severe quality deficiency in
 OpenStack. Not addressing this is, imho, an existential risk to the
 project. However, the current approach is to pick contributors from the
 community at random and hold them personally responsible for project
 bugs selected at random. Not only has this approach failed, it is
 impractical, unreasonable, and poisonous to the community at large. It
 is also unrelated to the purpose of gate testing, which is to find bugs
 *in the changes under test*.
 
 I would like to make the radical proposal that we stop gating on CI
 failures. We will continue to run them on every change, but only after
 the change has been successfully merged.
 
 Benefits:
 * Without rechecks, the gate will use 8 times fewer resources.
 * Log analysis is still available to indicate the emergence of races.
 * Fixes can be merged quicker.
 * Vastly less developer time spent monitoring gate failures.
 
 Costs:
 * A rare class of merge bug will make it into master.
 
 Note that the benefits above will also offset the cost of resolving this
 rare class of merge bug.
 
 Of course, we still have the problem of finding resources to monitor and
 fix CI failures. An additional benefit of not gating on CI will be that
 we can no longer pretend that picking developers for project-affecting
 bugs by lottery is likely to achieve results. As a project we need to
 understand the importance of CI failures. We need a proper 

Re: [openstack-dev] [Neutron][LBaaS] Milestone and Due Dates

2014-07-21 Thread Kyle Mestery
On Mon, Jul 21, 2014 at 11:51 AM, Jorge Miramontes
jorge.miramon...@rackspace.com wrote:
 Hey Kyle,

 I've viewed that link many times but it mentions nothing about 7-20 being
 Spec approval deadline. Am I missing something?

Hi Jorge, this was not in that wiki. It was communicated on the
mailing list here [1]. Basically, SPD/SAD are something new which we
came up with in the weekly project meeting in June. Only Nova and
Neutron participated in this during the Juno cycle, and Nova had a
much earlier SPD/SAD.

Thanks!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/039138.html

 Cheers,
 --Jorge




 On 7/18/14 9:52 PM, Kyle Mestery mest...@mestery.com wrote:

On Fri, Jul 18, 2014 at 4:40 PM, Jorge Miramontes
jorge.miramon...@rackspace.com wrote:
 Hey Kyle (and anyone else that may know the answers to my questions),

 There are several blueprints that don't have Juno milestones attached to
 them and was wondering if we could assign them so the broader community
is
 aware of the work the LBaaS folks are working on. These are the
blueprints
 that are currently being worked on but do not have an assigned
milestone:


https://blueprints.launchpad.net/neutron/+spec/lbaas-ref-impl-tls-support
 (no milestone)
 https://blueprints.launchpad.net/neutron/+spec/lbaas-ssl-termination
('next'
 milestone. Not sure if this means juno-2 or juno-3)
 https://blueprints.launchpad.net/neutron/+spec/lbaas-l7-rules (no
milestone)
 https://blueprints.launchpad.net/neutron/+spec/neutron-flavor-framework
(no
 milestone)
 https://blueprints.launchpad.net/neutron/+spec/lbaas-l7-rules (no
milestone)

These do not have a milestone set in LP yet because the specs are not
approved. It's unclear if all of these will be approved for Juno-3 at
this point, though I suspect at least a few will be. I'm actively
reviewing final specs for approval before Spec Approval Deadline on
Sunday, 7-20.

 Also, please let me know if I left something out everyone.

 Lastly, what are the definitive spec/implementation dates that the LBaaS
 community should be aware of? A lot of us are confused on exact dates
and I
 wanted to make sure we were all on the same page so that we can put
 resources on items that are more time sensitive.

Per above, SAD is this Sunday. The Juno release schedule is on the
wiki here [1].

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/Juno_Release_Schedule

 Cheers,
 --Jorge

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [Spec freeze exception] Cisco Nexus ML2 driver feature work

2014-07-21 Thread Henry Gessau
I would like to request Juno spec freeze exceptions for the following, all of
which add features to the ML2 driver for the Cisco Nexus family of switches.


https://review.openstack.org/95834  - Provider Segment Support
https://review.openstack.org/95910  - Layer 3 Service plugin

The above two features are needed for the Nexus ML2 driver to reach feature
parity with the legacy Cisco Nexus plugin, which is going to be deprecated
because it depends on the OVS plugin which is being deprecated.


https://review.openstack.org/98177  - VxLAN Gateway Support

The dependencies for this one are now approved. It could be approved as a
low-priority item with the caveat of best effort for review of code patches.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] [QA] Status update // SFE for deprecation of Baremetal

2014-07-21 Thread Dan Smith
 In addition to seeking a spec-freeze-exception for 95025, I would also
 like some clarification of the requirement to test this upgrade
 path. Some nova-core folks have pointed out that they do not want to
 accept the nova.virt.ironic driver until the upgrade path from
 nova.virt.baremetal *has test coverage*, but what that means is not
 clear to me. It's been suggested that we use grenade (I am pretty sure
 Sean suggested this at the summit, and I wrote it into my spec proposal
 soon thereafter). After looking into grenade, I don't think it is the
 right tool to test with, and I'm concerned that no one pointed this out
 sooner.

Grenade is our release test tool, so I think that, barring details, it's
reasonable to use $GRENADE when talking about this sort of thing. I
didn't realize that nova-bm doesn't work in devstack until you pointed
it out in IRC last week. Since we're pretty good about requiring
devstack support for new things like drivers, I would have expected
nova-bm to work there, but obviously times were a bit different when
that driver was merged.

 Philosophically, this isn't an upgrade of one service from version X to
 Y. It's a replacement of one nova driver with a completely different
 driver. As I understand it, that's not what grenade is for. But maybe
 I'm wrong on this, or maybe it's flexible.

I think it's start devstack on release X, validate, do some work,
re-start devstack on release Y, validate. I'm not sure that it's
ill-suited for this, but IANAGE.

 I also have a technical objection: even if devstack can start and
 properly configure nova.virt.baremteal (which I doubt, because it isn't
 tested at all), it is going to fail the tempest/api/compute test suite
 horribly. The baremetal driver never passed tempest, and never had
 devstack-gate support. This matters because grenade uses tempest to
 validate a stack pre- and post-upgrade. Therefore, since we know that
 the old code is going to fail tempest, requiring grenade testing as a
 precondition to accepting the ironic driver effectively means we need to
 go develop the baremetal driver to a point it could pass tempest. I'm
 going to assume no one is actually suggesting that, and instead believe
 that none of us thought this through.
 
 (FWIW, Ironic doesn't pass the tempest/api/compute suite today, but
 we're working hard on it.)

Do the devstack exercises pass? We test things like cells today (/me
hears sdague scream in the background), which don't pass tempest, using
the exercises to make sure it's at least able to create an instance.

 So, I'd like to ask for suggestions on what sort of upgrade testing is
 reasonable here. I'll toss out two ideas:
 - load some fake data into the nova_bm schema, run the upgrade scripts,
 start ironic, issue some API queries, and see if the data's correct
 - start devstack, load some real data into the nova_bm schema, run the
 upgrade scripts, then try to deploy an instance with ironic

These were my suggestions last week, so I'll own up to them now.
Obviously I think that something using grenade that goes from a
functional environment on release X to a functional environment on
release Y is best. However, I of course don't think it makes sense to
spend a ton of time getting nova-bm to pass tempest just so we can shoot
it in the head.

I'm not really sure what to do here. I think that we need an upgrade
path, and that it needs to be tested. I don't think our users would
appreciate us removing any other virt driver and replacing it with a new
one, avoiding an upgrade path because it's a different driver now. I
also don't want to spend a bunch of time on nova-bm, which we have
already neglected in our other test requirements (which is maybe part of
the problem here).

Assuming grenade can be flexible about what it runs against the old and
new environments to determine workyness, then I think the second
option above is probably a pretty good level of assurance, given where
we are right now.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo][Heat] Heat is not able to create swift cloud server

2014-07-21 Thread Clint Byrum
Excerpts from Peeyush Gupta's message of 2014-07-20 23:13:16 -0700:
 Hi all,
 
 I have been trying to set up tripleo using instack with RDO.
 Now, when deploying overcloud, the script is failing consistently
 with CREATE_FAILED error:
 
 + heat stack-create -f overcloud.yaml -P 
 AdminToken=efe958561450ba61d7ef8249d29b0be1ba95dc11 -P 
 AdminPassword=2b919f2ac7790ca1053ac58bc4621ca0967a0cba -P 
 CinderPassword=e7d61883a573a3dffc65a5fb958c94686baac848 -P 
 GlancePassword=cb896d6392e08241d504f3a0a2b489fc6f2612dd -P 
 HeatPassword=7a3138ef58365bb666cb30c8377447b74e75a0ef -P 
 NeutronPassword=4480ec8f2e004be4b06d14e1e228d882e18b3c2c -P 
 NovaPassword=e4a34b6caeeb7dbc497fb1c557a396c422b4d103 -P 
 NeutronPublicInterface=eth0 -P 
 SwiftPassword=ed3761a03959e0d636b8d6fc826103734069f9dc -P 
 SwiftHashSuffix=1a26593813bb7d6b38418db747b4243d4f1b5a56 -P 
 NovaComputeLibvirtType=qemu -P 'GlanceLogFile='\'''\''' -P 
 NeutronDnsmasqOptions=dhcp-option-force=26,1400 overcloud
 +--+++--+
 | id                                   | stack_name | stack_status       | 
 creation_time        |
 +--+++--+
 | 737ada9f-aa45-45b6-a42b-c0a496d2407e | overcloud  | CREATE_IN_PROGRESS | 
 2014-07-21T06:02:22Z |
 +--+++--+
 + tripleo wait_for_stack_ready 220 10 overcloud
 Command output matched 'CREATE_FAILED'. Exiting...
 
 Here is the heat log:
 
 
 2014-07-18 06:51:11.884 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:12.921 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:16.058 30750 ERROR heat.engine.resource [-] CREATE : Server 
 SwiftStorage0 [07e42c3d-0f1b-4bb9-b980-ffbb74ac770d] Stack overcloud 
 [0ca028e7-682b-41ef-8af0-b2eb67bee272]
 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Traceback (most 
 recent call last):
 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File 
 /usr/lib/python2.7/site-packages/heat/engine/resource.py, line 420, in 
 _do_action
 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource while not 
 check(handle_data):
 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File 
 /usr/lib/python2.7/site-packages/heat/engine/resources/server.py, line 545, 
 in check_create_complete
 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource return 
 self._check_active(server)
 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource File 
 /usr/lib/python2.7/site-packages/heat/engine/resources/server.py, line 561, 
 in _check_active
 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource raise exc
 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Error: Creation of 
 server overcloud-SwiftStorage0-qdjqbif6peva failed.
 2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource
 2014-07-18 06:51:16.255 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:16.939 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:17.368 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:17.638 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:18.158 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:18.613 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:19.113 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:19.765 30750 WARNING heat.common.keystoneclient [-] 
 stack_user_domain ID not set in heat.conf falling back to using default
 2014-07-18 06:51:20.247 30750 WARNING heat.engine.service [-] Stack create 
 failed, status FAILED
 
 How can I resolve this?

Heat is just responding to Nova. You need to look at nova and find out
why that server failed. 'nova show overcloud-SwiftStorage0-qdjqbif6peva'
should work.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] - rebasing patches for CI

2014-07-21 Thread Kevin Benton
I see. So then back to my other question, is it possible to get access to
the same branch that is being passed to the OpenStack CI devstack tests?

For example, in the console output I can see it uses a ref
like refs/zuul/master/Z75ac747d605b4eb28d4add7fa5b99890.[1]
Is that ref visible somewhere (other than the logs, of course) so that it could
be used in a third-party system?

Thanks

1.
http://logs.openstack.org/64/83664/17/check/gate-neutron-python27/db29f20/console.html.gz
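
(For what it's worth, a rough local approximation of what Zuul builds is to fetch the patchset straight from Gerrit and merge it onto the current branch tip, something along these lines; this ignores any dependent changes Zuul would also pull in:)

  git clone https://git.openstack.org/openstack/neutron && cd neutron
  git fetch https://review.openstack.org/openstack/neutron refs/changes/64/83664/17
  git merge --no-edit FETCH_HEAD    # roughly what Zuul's merger does before handing the ref to Jenkins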


On Sun, Jul 13, 2014 at 3:36 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-07-13 00:09:11 -0700 (-0700), Kevin Benton wrote:
 [...]
  Does Zuul only cherry-pick the top commit of the proposed patch
  instead of merging the proposed patch's branch into master (which
  would merge all dependent patchsets)?

 In an independent pipeline, Zuul tests the change as merged to the
 tip of the target branch along with any other changes that change
 depends on in Gerrit.
 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Rally] PTL Candidacy

2014-07-21 Thread Boris Pavlovic
Hi,

I would like to propose my candidacy for Rally PTL.

I started this project to make benchmarking of OpenStack as simple as
possible. This means not only load generation, but also an OpenStack-specific
benchmark framework, data analysis, and integration with the gates. All of
these things should make it simple for developers and operators to
benchmark (perf, scale, stress test) OpenStack, share experiment
results, and have a fast way to find what produces a bottleneck, or simply to
ensure that OpenStack works well under the load they are expecting.

I am the current unofficial PTL, and my responsibilities include things
such as:
1) Adapting the Rally architecture to cover everybody's use cases
2) Building & managing the work of the community
3) Writing a lot of code
4) Working on docs & wiki
5) Helping newbies join the Rally team

As PTL I would like to continue this work and finish my initial goals:
1) Ensure that everybody's use cases are fully covered
2) Ensure there is no monopoly in the project
3) Run Rally in the gates of all OpenStack projects (currently we have check
jobs in Keystone, Cinder, Glance & Neutron)
4) Continue work on making the project more mature; this covers topics like
increasing unit and functional test coverage and making Rally absolutely
safe to run against any production cloud.


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] VMware DVS support

2014-07-21 Thread Armando M.
I think the specs under the umbrella one can be approved/treated
individually.

The umbrella one is an informational blueprint; there is not going to be
code associated with it. However, before approving it (and the individual
ones), we'd need all the parties interested in vSphere support for Neutron
to reach an agreement on what the code will look like, so that the
individual contributions being proposed do not clash with each
other or create needless duplication.




On 21 July 2014 06:11, Kyle Mestery mest...@mestery.com wrote:

 On Sun, Jul 20, 2014 at 4:21 AM, Gary Kotton gkot...@vmware.com wrote:
  Hi,
  I would like to propose the following for spec freeze exception:
 
  https://review.openstack.org/#/c/105369
 
  This is an umbrella spec for a number of VMware DVS support specs. Each
 has
  its own unique use case and will enable a lot of existing VMware DVS
 users
  to start to use OpenStack.
 
  For https://review.openstack.org/#/c/102720/ we have the following
 which we
  can post when the internal CI for the NSX-v is ready (we are currently
  working on this):
   - core plugin functionality
   - layer 3 support
   - security group support
 
 Do we need to approve all the under the umbrella specs as well?

  Thanks
  Gary
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (daniel krause)

2014-07-21 Thread Changbin Liu
+1


Thanks

Changbin


On Fri, Jul 18, 2014 at 4:08 PM, Joshua Harlow harlo...@outlook.com wrote:

 Greetings all stackers,

 I propose that we add Daniel Krause[1] to the taskflow-core team[2].

 Daniel has been actively contributing to taskflow for a while now, both in
 helping prove taskflow out (by being a user as well) and helping with the
 review load. He has provided quality reviews and is doing an awesome job
 with
 the various taskflow concepts and helping make taskflow the best library
 it can
 be!

 Overall I think he would make a great addition to the core review team.

 Please respond with +1/-1.

 Thanks much!

 --

 Joshua Harlow

 It's openstack, relax... | harlo...@yahoo-inc.com

 [1] https://launchpad.net/~d-krause
 [2] https://wiki.openstack.org/wiki/TaskFlow/CoreTeam


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] VMware DVS support

2014-07-21 Thread Kyle Mestery
On Mon, Jul 21, 2014 at 2:03 PM, Armando M. arma...@gmail.com wrote:
 I think the specs under the umbrella one can be approved/treated
 individually.

 The umbrella one is an informational blueprint, there is not going to be
 code associated with it, however before approving it (and the individual
 ones) we'd need all the parties interested in vsphere support for Neutron to
 reach an agreement as to what the code will look like so that the individual
 contributions being proposed are not going to clash with each other or
 create needless duplication.

That's what I was thinking as well. So, given where we're at in Juno,
I'm leaning towards having all of this consensus building happen now,
so that we can start the Kilo cycle with these BPs in agreement from all
contributors.

Does that sound ok?

Thanks,
Kyle




 On 21 July 2014 06:11, Kyle Mestery mest...@mestery.com wrote:

 On Sun, Jul 20, 2014 at 4:21 AM, Gary Kotton gkot...@vmware.com wrote:
  Hi,
  I would like to propose the following for spec freeze exception:
 
  https://review.openstack.org/#/c/105369
 
  This is an umbrella spec for a number of VMware DVS support specs. Each
  has
  its own unique use case and will enable a lot of existing VMware DVS
  users
  to start to use OpenStack.
 
  For https://review.openstack.org/#/c/102720/ we have the following which
  we
  can post when the internal CI for the NSX-v is ready (we are currently
  working on this):
   - core plugin functionality
   - layer 3 support
   - security group support
 
 Do we need to approve all the under the umbrella specs as well?

  Thanks
  Gary
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] VMware DVS support

2014-07-21 Thread Armando M.
That would be my thinking as well, but if we manage to make impressive
progress from now until the Feature Freeze proposal deadline, I'd be
willing to reevaluate the situation.

A.


On 21 July 2014 12:13, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Jul 21, 2014 at 2:03 PM, Armando M. arma...@gmail.com wrote:
  I think the specs under the umbrella one can be approved/treated
  individually.
 
  The umbrella one is an informational blueprint, there is not going to be
  code associated with it, however before approving it (and the individual
  ones) we'd need all the parties interested in vsphere support for
 Neutron to
  reach an agreement as to what the code will look like so that the
 individual
  contributions being proposed are not going to clash with each other or
  create needless duplication.
 
 That's what I was thinking as well. So, given where we're at in Juno,
 I'm leaning towards having all of this consensus building happen now
 and we can start the Kilo cycle with these BPs in agreement from all
 contributors.

 Does that sound ok?

 Thanks,
 Kyle

 
 
 
  On 21 July 2014 06:11, Kyle Mestery mest...@mestery.com wrote:
 
  On Sun, Jul 20, 2014 at 4:21 AM, Gary Kotton gkot...@vmware.com
 wrote:
   Hi,
   I would like to propose the following for spec freeze exception:
  
   https://review.openstack.org/#/c/105369
  
   This is an umbrella spec for a number of VMware DVS support specs.
 Each
   has
   its own unique use case and will enable a lot of existing VMware DVS
   users
   to start to use OpenStack.
  
   For https://review.openstack.org/#/c/102720/ we have the following
 which
   we
   can post when the internal CI for the NSX-v is ready (we are currently
   working on this):
- core plugin functionality
- layer 3 support
- security group support
  
  Do we need to approve all the under the umbrella specs as well?
 
   Thanks
   Gary
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

2014-07-21 Thread Luke Gorrie
Howdy!

I am writing to request voting rights for the Tail-f CI account. This runs
tests for the Tail-f NCS ML2 mechanism driver in Neutron.

This account has been testing ML2 changes and posting (non-voting) results
since June 10th. It has made around 500 test runs in that time. I am
monitoring its operation daily.

The recent changes that it has posted results for are here:
https://review.openstack.org/#/dashboard/9695

The top level logs directory is here:
http://openstack-ci.tail-f.com:81/html/ci-logs/

We reviewed its output in the 3rd party meeting last week. Two issues were
raised:
- Using an IP address instead of a DNS name. That's now corrected.
- Running only a small set of Tempest tests. I'm working on expanding that
(in a separate staging environment.)

The account has a rich history :-). Initially we brought it online back
around Nov 2013 early in the Icehouse cycle. That didn't work out so well:
we had a bunch of operational issues and as OpenStack newbies we were
oblivious to the impact they had on other people's workflows -- we were
mortified to learn that we had created a disruption. Since then we have
been more conservative which is why the account was mostly idle until June.

I reckon we have a pretty good understanding of the expectations on CI
operators now and I would like to enable the voting permission.

I have recently developed a new CI daemon that I hope to migrate this
account over to in the future: https://github.com/SnabbCo/shellci.

Cheers,
-Luke

NB: Last week we had a half-dozen or so errors due to the infamous ansible
versioning issue. I didn't retrigger all of those since this was a
widespread issue and our CI wasn't voting.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] VMware DVS support

2014-07-21 Thread Kyle Mestery
OK, let's go with that, though given how packed Juno-3 is, I'd lean
towards working out the kinks and architecture here and then circling
back for Kilo to get this work approved and moving forward. We can
chat in August if by chance major progress is made.

Thanks,
Kyle

On Mon, Jul 21, 2014 at 2:19 PM, Armando M. arma...@gmail.com wrote:
 That would be my thinking as well, but if we managed to make an impressive
 progress from now until the Feature Freeze proposal deadline, I'd be willing
 to reevaluate the situation.

 A.


 On 21 July 2014 12:13, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Jul 21, 2014 at 2:03 PM, Armando M. arma...@gmail.com wrote:
  I think the specs under the umbrella one can be approved/treated
  individually.
 
  The umbrella one is an informational blueprint, there is not going to be
  code associated with it, however before approving it (and the individual
  ones) we'd need all the parties interested in vsphere support for
  Neutron to
  reach an agreement as to what the code will look like so that the
  individual
  contributions being proposed are not going to clash with each other or
  create needless duplication.
 
 That's what I was thinking as well. So, given where we're at in Juno,
 I'm leaning towards having all of this consensus building happen now
 and we can start the Kilo cycle with these BPs in agreement from all
 contributors.

 Does that sound ok?

 Thanks,
 Kyle

 
 
 
  On 21 July 2014 06:11, Kyle Mestery mest...@mestery.com wrote:
 
  On Sun, Jul 20, 2014 at 4:21 AM, Gary Kotton gkot...@vmware.com
  wrote:
   Hi,
   I would like to propose the following for spec freeze exception:
  
   https://review.openstack.org/#/c/105369
  
   This is an umbrella spec for a number of VMware DVS support specs.
   Each
   has
   its own unique use case and will enable a lot of existing VMware DVS
   users
   to start to use OpenStack.
  
   For https://review.openstack.org/#/c/102720/ we have the following
   which
   we
   can post when the internal CI for the NSX-v is ready (we are
   currently
   working on this):
- core plugin functionality
- layer 3 support
- security group support
  
  Do we need to approve all the under the umbrella specs as well?
 
   Thanks
   Gary
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Virtio-scsi settings nova-specs exception

2014-07-21 Thread Dan Smith
 We've already approved many other blueprints for Juno that involve features
 from new libvirt, so I don't think it is credible to reject this or any
 other feature that requires new libvirt in Juno.
 
 Furthermore this proposal for Nova is a targetted feature which is not
 enabled by default, so the risk of regression for people not using it
 is negligible. So I see no reason not to accept this feature.

Yep, the proposal that started this discussion was never aimed at
creating new test requirements for already-approved nova specs anyway. I
definitely don't think we need to hold up something relatively simple
like this on those grounds, given where we are in the discussion.

--Dan
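
(For context, if the spec lands as proposed, the feature would presumably be opted into per image along these lines; the exact property names are an assumption here, not a released interface:)

  glance image-update \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    <image-id>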



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-21 Thread Kyle Mestery
On Mon, Jul 21, 2014 at 9:45 AM, Steve Gordon sgor...@redhat.com wrote:
 - Original Message -
 From: Brent Eagles beag...@redhat.com
 To: openstack-dev@lists.openstack.org

 Hi,

 A bug titled Creating quantum L2 networks (without subnets) doesn't
 work as expected (https://bugs.launchpad.net/nova/+bug/1039665) was
 reported quite some time ago. Beyond the discussion in the bug report,
 there have been related bugs reported a few times.

 * https://bugs.launchpad.net/nova/+bug/1304409
 * https://bugs.launchpad.net/nova/+bug/1252410
 * https://bugs.launchpad.net/nova/+bug/1237711
 * https://bugs.launchpad.net/nova/+bug/1311731
 * https://bugs.launchpad.net/nova/+bug/1043827

 BZs on this subject seem to have a hard time surviving. They get marked
 as incomplete or invalid, or, in the related issues, the problem NOT
 related to the feature is addressed and the bug closed. We seem to dance
 around actually getting around to implementing this. The multiple
 reports show there *is* interest in this functionality but at the moment
 we are without an actual implementation.
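
(For illustration, the use case being discussed boils down to this, using the standard CLI of the time; the network and VM names are arbitrary:)

  neutron net-create l2-only                        # an L2-only network, deliberately with no subnet
  nova boot --flavor m1.small --image cirros \
       --nic net-id=<uuid-of-l2-only> l2-only-vm    # historically this is where things misbehave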

 At the moment there are multiple related blueprints:

 Following up with post SAD status:

 * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
   extension support

 Remains unapproved, no negative feedback on current revision.

 * https://review.openstack.org/#/c/106222/ Add Port Security
   Implementation in ML2 Plugin

 Has a -2 to highlight the significant overlap with 99873 above.

 * https://review.openstack.org/#/c/97715 NFV unaddressed interfaces

 Remains unapproved, no negative feedback on current revision.

 Although there were some discussions about these last week, I am not sure we
 reached consensus on whether either of these (or even both of them) is the
 correct path forward - particularly to address the problem Brent raised
 w.r.t. the creation of networks without subnets - I believe this currently
 still works with nova-network?

 Regardless, I am wondering if either of the spec authors intends to propose
 these for a spec freeze exception?

For the port security implementation in ML2, I've had one of the
authors reach out to me. I'd like them to send an email to the
openstack-dev ML though, so we can have the discussion here. For the
NFV unaddressed interfaces, I've not had anyone reach out to me yet.

Thanks,
Kyle

 Thanks,

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] network_rpcapi.migrate_instance_start

2014-07-21 Thread Nachi Ueno
Hi nova folks

QQ: Who uses migrate_instance_start/finish, and why do we need this RPC call?

I grepped the code but I couldn't find an implementation for it.

https://github.com/openstack/nova/blob/372c54927ab4f6c226f5a1a2aead40b89617cf77/nova/network/manager.py#L1683

Best
Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] The gate: a failure analysis

2014-07-21 Thread Jay Pipes

On 07/21/2014 02:03 PM, Clint Byrum wrote:

Thanks Matthew for the analysis.

I think you missed something though.

Right now the frustration is that unrelated intermittent bugs stop your
presumably good change from getting in.

Without gating, the result would be that even more bugs, many of them not
intermittent at all, would get in. Right now, the one random developer
who has to hunt down the rechecks and do them is inconvenienced. But
without a gate, _every single_ developer will be inconvenienced until
the fix is merged.

The false negative rate is _way_ too high. Nobody would disagree there.
However, adding more false negatives and allowing more people to ignore
the ones we already have, seems like it would have the opposite effect:
Now instead of annoying the people who hit the random intermittent bugs,
we'll be annoying _everybody_ as they hit the non-intermittent ones.


+10

Best,
-jay


Excerpts from Matthew Booth's message of 2014-07-21 03:38:07 -0700:

On Friday evening I had a dependent series of 5 changes all with
approval waiting to be merged. These were all refactor changes in the
VMware driver. The changes were:

* VMware: DatastorePath join() and __eq__()
https://review.openstack.org/#/c/103949/

* VMware: use datastore classes get_allowed_datastores/_sub_folder
https://review.openstack.org/#/c/103950/

* VMware: use datastore classes in file_move/delete/exists, mkdir
https://review.openstack.org/#/c/103951/

* VMware: Trivial indentation cleanups in vmops
https://review.openstack.org/#/c/104149/

* VMware: Convert vmops to use instance as an object
https://review.openstack.org/#/c/104144/

The last change merged this morning.

In order to merge these changes, over the weekend I manually submitted:

* 35 rechecks due to false negatives, an average of 7 per change
* 19 resubmissions after a change passed, but its dependency did not

Other interesting numbers:

* 16 unique bugs
* An 87% false negative rate
* 0 bugs found in the change under test

Because we don't fail fast, that is an average of at least 7.3 hours in
the gate. Much more in fact, because some runs fail on the second pass,
not the first. Because we don't resubmit automatically, that is only if
a developer is actively monitoring the process continuously, and
resubmits immediately on failure. In practise this is much longer,
because sometimes we have to sleep.

All of the above numbers are counted from the change receiving an
approval +2 until final merging. There were far more failures than this
during the approval process.

Why do we test individual changes in the gate? The purpose is to find
errors *in the change under test*. By the above numbers, it has failed
to achieve this at least 16 times previously.

Probability of finding a bug in the change under test: Small
Cost of testing:   High
Opportunity cost of slowing development:   High

and for comparison:

Cost of reverting rare false positives:Small

The current process expends a lot of resources, and does not achieve its
goal of finding bugs *in the changes under test*. In addition to using a
lot of technical resources, it also prevents good change from making its
way into the project and, not unimportantly, saps the will to live of
its victims. The cost of the process is overwhelmingly greater than its
benefits. The gate process as it stands is a significant net negative to
the project.

Does this mean that it is worthless to run these tests? Absolutely not!
These tests are vital to highlight a severe quality deficiency in
OpenStack. Not addressing this is, imho, an existential risk to the
project. However, the current approach is to pick contributors from the
community at random and hold them personally responsible for project
bugs selected at random. Not only has this approach failed, it is
impractical, unreasonable, and poisonous to the community at large. It
is also unrelated to the purpose of gate testing, which is to find bugs
*in the changes under test*.

I would like to make the radical proposal that we stop gating on CI
failures. We will continue to run them on every change, but only after
the change has been successfully merged.

Benefits:
* Without rechecks, the gate will use 8 times fewer resources.
* Log analysis is still available to indicate the emergence of races.
* Fixes can be merged quicker.
* Vastly less developer time spent monitoring gate failures.

Costs:
* A rare class of merge bug will make it into master.

Note that the benefits above will also offset the cost of resolving this
rare class of merge bug.

Of course, we still have the problem of finding resources to monitor and
fix CI failures. An additional benefit of not gating on CI will be that
we can no longer pretend that picking developers for project-affecting
bugs by lottery is likely to achieve results. As a project we need to
understand the importance of CI failures. We need a proper negotiation
with contributors to 

Re: [openstack-dev] Virtio-scsi settings nova-specs exception

2014-07-21 Thread Sean Dague
On 07/21/2014 03:35 PM, Dan Smith wrote:
 We've already approved many other blueprints for Juno that involve features
 from new libvirt, so I don't think it is credible to reject this or any
 other feature that requires new libvirt in Juno.

 Furthermore this proposal for Nova is a targetted feature which is not
 enabled by default, so the risk of regression for people not using it
 is negligible. So I see no reason not to accept this feature.
 
 Yep, the proposal that started this discussion was never aimed at
 creating new test requirements for already-approved nova specs anyway. I
 definitely don't think we need to hold up something relatively simple
 like this on those grounds, given where we are in the discussion.
 
 --Dan

Agreed. This was mostly about figuring out a future path for ensuring
that the features that we say work in OpenStack either have some
validation behind them, or some appropriate disclaimers so that people
realize they aren't really tested in our normal system.

I'm fine with the virtio-scsi settings moving forward.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] The gate: a failure analysis

2014-07-21 Thread David Kranz

On 07/21/2014 04:13 PM, Jay Pipes wrote:

On 07/21/2014 02:03 PM, Clint Byrum wrote:

Thanks Matthew for the analysis.

I think you missed something though.

Right now the frustration is that unrelated intermittent bugs stop your
presumably good change from getting in.

Without gating, the result would be that even more bugs, many of them 
not

intermittent at all, would get in. Right now, the one random developer
who has to hunt down the rechecks and do them is inconvenienced. But
without a gate, _every single_ developer will be inconvenienced until
the fix is merged.

The false negative rate is _way_ too high. Nobody would disagree there.
However, adding more false negatives and allowing more people to ignore
the ones we already have, seems like it would have the opposite effect:
Now instead of annoying the people who hit the random intermittent bugs,
we'll be annoying _everybody_ as they hit the non-intermittent ones.


+10

Right, but perhaps there is a middle ground. We must not allow changes 
in that can't pass through the gate, but we can separate the problems
of constant rechecks using too many resources, and of constant rechecks 
causing developer pain. If failures were deterministic we would skip the 
failing tests until they were fixed. Unfortunately many of the common 
failures can blow up any test, or even the whole process. Following on 
what Sam said, what if we automatically reran jobs that failed in a 
known way, and disallowed recheck/reverify no bug? Developers would 
then have to track down what bug caused a failure or file a new one. But 
they would have to do so much less frequently, and as more common 
failures were catalogued it would become less and less frequent.
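
(A crude sketch of the 'rerun only on a known signature' idea, purely illustrative; real signature matching would use elasticsearch queries rather than grep, and the URL and bug number below are placeholders:)

  CONSOLE_LOG_URL=http://logs.openstack.org/.../console.html   # the failed job's console log (placeholder)
  curl -s "$CONSOLE_LOG_URL" > console.log
  if grep -qE "Timed out waiting for .* to become ACTIVE" console.log; then
      echo "matches known bug 1234567 -> re-enqueue the job automatically"
  else
      echo "unknown failure -> a developer must triage it and file or link a bug"
  fi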


Some might (reasonably) argue that this would be a bad thing because it 
would reduce the incentive for people to fix bugs if there were less 
pain being inflicted. But given how hard it is to track down these race 
bugs, and that we as a community have no way to force time to be spent 
on them, and that it does not appear that these bugs are causing real 
systems to fall down (only our gating process), perhaps something 
different should be considered?


 -David


Best,
-jay


Excerpts from Matthew Booth's message of 2014-07-21 03:38:07 -0700:

On Friday evening I had a dependent series of 5 changes all with
approval waiting to be merged. These were all refactor changes in the
VMware driver. The changes were:

* VMware: DatastorePath join() and __eq__()
https://review.openstack.org/#/c/103949/

* VMware: use datastore classes get_allowed_datastores/_sub_folder
https://review.openstack.org/#/c/103950/

* VMware: use datastore classes in file_move/delete/exists, mkdir
https://review.openstack.org/#/c/103951/

* VMware: Trivial indentation cleanups in vmops
https://review.openstack.org/#/c/104149/

* VMware: Convert vmops to use instance as an object
https://review.openstack.org/#/c/104144/

The last change merged this morning.

In order to merge these changes, over the weekend I manually submitted:

* 35 rechecks due to false negatives, an average of 7 per change
* 19 resubmissions after a change passed, but its dependency did not

Other interesting numbers:

* 16 unique bugs
* An 87% false negative rate
* 0 bugs found in the change under test

Because we don't fail fast, that is an average of at least 7.3 hours in
the gate. Much more in fact, because some runs fail on the second pass,
not the first. Because we don't resubmit automatically, that is only if
a developer is actively monitoring the process continuously, and
resubmits immediately on failure. In practise this is much longer,
because sometimes we have to sleep.

All of the above numbers are counted from the change receiving an
approval +2 until final merging. There were far more failures than this
during the approval process.

Why do we test individual changes in the gate? The purpose is to find
errors *in the change under test*. By the above numbers, it has failed
to achieve this at least 16 times previously.

Probability of finding a bug in the change under test: Small
Cost of testing:   High
Opportunity cost of slowing development:   High

and for comparison:

Cost of reverting rare false positives:Small

The current process expends a lot of resources, and does not achieve 
its
goal of finding bugs *in the changes under test*. In addition to 
using a
lot of technical resources, it also prevents good change from making 
its

way into the project and, not unimportantly, saps the will to live of
its victims. The cost of the process is overwhelmingly greater than its
benefits. The gate process as it stands is a significant net 
negative to

the project.

Does this mean that it is worthless to run these tests? Absolutely not!
These tests are vital to highlight a severe quality deficiency in
OpenStack. Not addressing this is, imho, an existential risk to the
project. However, the current approach is 

[openstack-dev] [TripleO] No weekly meeting this week

2014-07-21 Thread James Polley
Due to the fact that the vast majority of the TripleO team are in a room
staring at me as I write this email, the weekly IRC meeting won't be
happening this week.

If you have any issues that need to be discussed this week, please bring
them up on IRC in #tripleo or via email.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

2014-07-21 Thread Collins, Sean
From: Luke Gorrie l...@snabb.co
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, July 21, 2014 3:22 PM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

The account has a rich history :-). Initially we brought it online back around 
Nov 2013 early in the Icehouse cycle. That didn't work out so well: we had a 
bunch of operational issues and as OpenStack newbies we were oblivious to the 
impact they had on other people's workflows -- we were mortified to learn that 
we had created a disruption. Since then we have been more conservative which is 
why the account was mostly idle until June.

The fact that I tried to reach out to the person who was listed as the contact 
back in November to try and resolve the -1 that this CI system gave, and never 
received a response until the public mailing list thread about revoking voting 
rights for Tail-F, makes me believe that the Tail-F CI system is still not 
ready to have that kind of privilege. Especially if the account was idle from 
around February until June - that is a huge gap, if I understand correctly?

--
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] The gate: a failure analysis

2014-07-21 Thread Sean Dague
On 07/21/2014 04:39 PM, David Kranz wrote:
 On 07/21/2014 04:13 PM, Jay Pipes wrote:
 On 07/21/2014 02:03 PM, Clint Byrum wrote:
 Thanks Matthew for the analysis.

 I think you missed something though.

 Right now the frustration is that unrelated intermittent bugs stop your
 presumably good change from getting in.

 Without gating, the result would be that even more bugs, many of them
 not
 intermittent at all, would get in. Right now, the one random developer
 who has to hunt down the rechecks and do them is inconvenienced. But
 without a gate, _every single_ developer will be inconvenienced until
 the fix is merged.

 The false negative rate is _way_ too high. Nobody would disagree there.
 However, adding more false negatives and allowing more people to ignore
 the ones we already have, seems like it would have the opposite effect:
 Now instead of annoying the people who hit the random intermittent bugs,
 we'll be annoying _everybody_ as they hit the non-intermittent ones.

 +10

 Right, but perhaps there is a middle ground. We must not allow changes
 in that can't pass through the gate, but we can separate the problems
 of constant rechecks using too many resources, and of constant rechecks
 causing developer pain. If failures were deterministic we would skip the
 failing tests until they were fixed. Unfortunately many of the common
 failures can blow up any test, or even the whole process. Following on
 what Sam said, what if we automatically reran jobs that failed in a
 known way, and disallowed recheck/reverify no bug? Developers would
 then have to track down what bug caused a failure or file a new one. But
 they would have to do so much less frequently, and as more common
 failures were catalogued it would become less and less frequent.

Elastic Recheck was never meant for this purpose. It doesn't tell you
all the bugs that were in your job, it just tells you possibly 1 bug
that might have caused something to go wrong. There is no guarantee
there weren't other bugs in there as well. Consider it a fail-open solution.

 Some might (reasonably) argue that this would be a bad thing because it
 would reduce the incentive for people to fix bugs if there were less
 pain being inflicted. But given how hard it is to track down these race
 bugs, and that we as a community have no way to force time to be spent
 on them, and that it does not appear that these bugs are causing real
 systems to fall down (only our gating process), perhaps something
 different should be considered?

I really beg to differ on that point. The Infra team will tell you how
terribly unreliable our cloud providers can be at times, hitting many of
the same issues that we expose in elastic recheck.

Lightly loaded / basically static environments will hit some of these
issues at a far lower rate. They are still out there though. Probably
largely ignored through massive retry loops around our stuff.

Allocating a compute server that you can ssh to a dozen times in a test
run shouldn't be considered a moon shot level of function. That's kind
of table stakes for IaaS. :)

And yes, it's hard to debug, but seriously, if the development community
can't figure out why OpenStack doesn't work, can anyone?

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Update on specs we needed approved

2014-07-21 Thread Brandon Logan
In reference to these 3 specs:

TLS Termination - https://review.openstack.org/#/c/98640/
L7 Switching - https://review.openstack.org/#/c/99709/
Implementing TLS in reference Impl -
https://review.openstack.org/#/c/100931/

Kyle has +2'ed all three, and once Mark McClain +2's them, one of the two
will +A them.

Thanks again Kyle and Mark!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][rally] Application for a new OpenStack Program: Performance and Scalability

2014-07-21 Thread Boris Pavlovic
Hi Stackers and TC,

The Rally contributor team would like to propose a new OpenStack program
with a mission to provide scalability and performance benchmarking, and
code profiling tools for OpenStack components.

We feel we've achieved a critical mass in the Rally project, with an
active, diverse contributor team. The Rally project will be the initial
project in a new proposed Performance and Scalability program.

Below, the details on our proposed new program.

Thanks for your consideration,
Boris



[1] https://review.openstack.org/#/c/108502/


Official Name
=============

Performance and Scalability

Codename
========

Rally

Scope
=====

Scalability benchmarking, performance analysis, and profiling of
OpenStack components and workloads

Mission
=======

To increase the scalability and performance of OpenStack clouds by:

* defining standard benchmarks
* sharing performance data between operators and developers
* providing transparency of code paths through profiling tools

Maturity
========

* Meeting logs http://eavesdrop.openstack.org/meetings/rally/2014/
* IRC channel: #openstack-rally
* Rally performance jobs are in (Cinder, Glance, Keystone & Neutron)
check pipelines.
* More than 950 commits over the last 10 months
* Large, diverse contributor community
 * http://stackalytics.com/?release=juno&metric=commits&project_type=All&module=rally
 * http://stackalytics.com/report/contribution/rally/180

* The unofficial project lead is Boris Pavlovic
 * Official election in progress.

Deliverables
============

Critical deliverables in the Juno cycle are:

* extending Rally Benchmark framework to cover all use cases that are
required by all OpenStack projects
* integrating OSprofiler in all core projects
* increasing functional & unit testing coverage of Rally.

Discussion
==========

One of the major goals of Rally is to make it simple to share results of
standardized benchmarks and experiments between operators and
developers. When an operator needs to verify certain performance
indicators meet some service level agreement, he will be able to run
benchmarks (from Rally) and share with the developer community the
results along with his OpenStack configuration. These benchmark results
will assist developers in diagnosing particular performance and
scalability problems experienced with the operator's configuration.

Another interesting area is Rally & the OpenStack CI process. Currently,
working on performance issues upstream tends to be a more social than
technical process. We can use Rally in the upstream gates to identify
performance regressions and measure improvement in scalability over
time. The use of Rally in the upstream gates will allow a more rigorous,
scientific approach to performance analysis. In the case of an
integrated OSprofiler, it will be possible to get detailed information
about API call flows (e.g. duration of API calls in different services).
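
For readers who have not tried Rally yet, the operator workflow described above is roughly the following (the sample task path is illustrative):

  rally deployment create --fromenv --name my-cloud   # register an existing cloud from OS_* env vars
  rally task start samples/tasks/scenarios/nova/boot-and-delete.json
  rally task report --out report.html                 # shareable HTML report of the run
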
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] [QA] Status update // SFE for deprecation of Baremetal

2014-07-21 Thread Devananda van der Veen
On Mon, Jul 21, 2014 at 11:15 AM, Dan Smith d...@danplanet.com wrote:

  In addition to seeking a spec-freeze-exception for 95025, I would also
  like some clarification of the requirement to test this upgrade
  path. Some nova-core folks have pointed out that they do not want to
  accept the nova.virt.ironic driver until the upgrade path from
  nova.virt.baremetal *has test coverage*, but what that means is not
  clear to me. It's been suggested that we use grenade (I am pretty sure
  Sean suggested this at the summit, and I wrote it into my spec proposal
  soon thereafter). After looking into grenade, I don't think it is the
  right tool to test with, and I'm concerned that no one pointed this out
  sooner.

 Grenade is our release test tool, so I think that, barring details, it's
 reasonable to use $GRENADE when talking about this sort of thing.


Grenade uses tempest to validate the old and new stacks. Unless Sean
and Matthew are willing to change that, this is a detail we can't ignore.


 I
 didn't realize that nova-bm doesn't work in devstack until you pointed
 it out in IRC last week. Since we're pretty good about requiring
 devstack support for new things like drivers, I would have expected
 nova-bm to work there, but obviously times were a bit different when
 that driver was merged.


  Philosophically, this isn't an upgrade of one service from version X to
  Y. It's a replacement of one nova driver with a completely different
  driver. As I understand it, that's not what grenade is for. But maybe
  I'm wrong on this, or maybe it's flexible.

 I think it's start devstack on release X, validate, do some work,
 re-start devstack on release Y, validate. I'm not sure that it's
 ill-suited for this, but IANAGE.




  I also have a technical objection: even if devstack can start and
  properly configure nova.virt.baremteal (which I doubt, because it isn't
  tested at all), it is going to fail the tempest/api/compute test suite
  horribly. The baremetal driver never passed tempest, and never had
  devstack-gate support. This matters because grenade uses tempest to
  validate a stack pre- and post-upgrade. Therefore, since we know that
  the old code is going to fail tempest, requiring grenade testing as a
  precondition to accepting the ironic driver effectively means we need to
  go develop the baremetal driver to a point it could pass tempest. I'm
  going to assume no one is actually suggesting that, and instead believe
  that none of us thought this through.
 
  (FWIW, Ironic doesn't pass the tempest/api/compute suite today, but
  we're working hard on it.)

 Do the devstack exercises pass?


A few of them passed, once upon a time, but the whole suite? It never
passed on the baremetal driver for me. And it was never run or maintained
in the gate.


 We test things like cells today (/me
 hears sdague scream in the background), which don't pass tempest, using
 the exercises to make sure it's at least able to create an instance.


  So, I'd like to ask for suggestions on what sort of upgrade testing is
  reasonable here. I'll toss out two ideas:
  - load some fake data into the nova_bm schema, run the upgrade scripts,
  start ironic, issue some API queries, and see if the data's correct
  - start devstack, load some real data into the nova_bm schema, run the
  upgrade scripts, then try to deploy an instance with ironic

 These were my suggestions last week, so I'll own up to them now.
 Obviously I think that something using grenade that goes from a
 functional environment on release X to a functional environment on
 release Y is best. However, I of course don't think it makes sense to
 spend a ton of time getting nova-bm to pass tempest just so we can shoot
 it in the head.


I'm glad to hear that, since everything up to this point in your reply
seems to indicate that we should go back and add test coverage (whether
tempest or exercise.sh) for the very code we are trying to delete.

So my question remains. Even for option #2, while we can load some real
data into the nova_bm schema, since we can't do any functional testing on
it today, I don't think we should be expected to go fix things to make that
pass. This leaves us in the position of running tempest only once -- on the
result of the migration. Is that sufficient from your perspective?
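
(To make that concrete, option #2 from earlier amounts to something like the sketch below; the migration script name is a placeholder and the bm_nodes column list is an assumption:)

  # seed one fake node into the old nova-baremetal schema (column list is a guess)
  mysql -e "INSERT INTO nova_bm.bm_nodes (uuid, cpus, memory_mb, local_gb, pm_address, pm_user, pm_password)
            VALUES (UUID(), 4, 8192, 80, '10.1.0.5', 'admin', 'secret');"
  # run the data migration that would ship with the new driver (name is hypothetical)
  ./tools/migrate-nova-bm-to-ironic.sh
  # verify the node and its power credentials survived the move...
  ironic node-list
  ironic node-show <node-uuid>
  # ...and then run tempest (or at least boot an instance) against the migrated environment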




 I'm not really sure what to do here. I think that we need an upgrade
 path, and that it needs to be tested. I don't think our users would
 appreciate us removing any other virt driver and replacing it with a new
 one, avoiding an upgrade path because it's a different driver now. I
 also don't want to spend a bunch of time on nova-bm, which we have
 already neglected in our other test requirements (which is maybe part of
 the problem here).


Yea. Nova has kept the baremetal driver in tree with no testing whatsoever
far beyond its relevance, hoping for Ironic to come along and replace it --
except no one was maintaining it, and the only user (TripleO) is eagerly

Re: [openstack-dev] [Nova] [Ironic] [QA] Status update // SFE for deprecation of Baremetal

2014-07-21 Thread Devananda van der Veen
On Mon, Jul 21, 2014 at 3:13 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 Yea. Nova has kept the baremetal driver in tree with no testing whatsoever
 far beyond its relevance, hoping for Ironic to come along and replace it --
 except no one was maintaining it, and the only user (TripleO) is eagerly
 moving to Ironic and doesn't care about a migration path.


and by whatsoever I mean with devstack...
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] pbr 0.10.0 released

2014-07-21 Thread Doug Hellmann
The Oslo team is pleased to announce the release of pbr 0.10.0, the
latest version of our setuptools wrapper for packaging python code.

0.10.0 includes:

* Remove all 2.7 filtering
* Stop filtering out argparse
* Remove mirror testing from the integration script

(Those first 2 changes are related to the argparse issue we have seen
recently under python 2.6. https://bugs.launchpad.net/pbr/+bug/1346357)

Please report any issues using the bug tracker:
https://bugs.launchpad.net/pbr

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-21 Thread Matthew Treinish

Hi Everyone,

I would like to propose 2 changes to the Tempest core team:

First, I'd like to nominate Andrea Frittoli to the Tempest core team. Over the
past cycle Andrea has steadily become more actively engaged in the Tempest
community. Besides his code contributions around refactoring Tempest's
authentication and credentials code, he has been providing reviews of
consistently high quality that show insight into both the project
internals and its future direction. In addition he has been active in the
qa-specs repo, both providing reviews and spec proposals, which has been very
helpful as we've been adjusting to using the new process. Keeping in mind that
becoming a member of the core team is about earning the trust of the members
of the current core team through communication and quality reviews, not simply a
matter of review numbers, I feel that Andrea will make an excellent addition to
the team.

As per the usual, if the current Tempest core team members would please vote +1
or -1(veto) to the nomination when you get a chance. We'll keep the polls open
for 5 days or until everyone has voted.

References:

https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z

http://stackalytics.com/?user_id=andrea-frittoli&metric=marks&module=qa-group


The second change that I'm proposing today is to remove Giulio Fidente from the
core team. He asked to be removed from the core team a few weeks back because he
is no longer able to dedicate the required time to Tempest reviews. So if there
are no objections to this I will remove him from the core team in a few days.
Sorry to see you leave the team Giulio...


Thanks,

Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] [QA] Status update // SFE for deprecation of Baremetal

2014-07-21 Thread Sean Dague
On 07/21/2014 06:13 PM, Devananda van der Veen wrote:
 On Mon, Jul 21, 2014 at 11:15 AM, Dan Smith d...@danplanet.com wrote:
 
  In addition to seeking a spec-freeze-exception for 95025, I would also
  like some clarification of the requirement to test this upgrade
  path. Some nova-core folks have pointed out that they do not want to
  accept the nova.virt.ironic driver until the upgrade path from
  nova.virt.baremetal *has test coverage*, but what that means is not
  clear to me. It's been suggested that we use grenade (I am pretty sure
  Sean suggested this at the summit, and I wrote it into my spec
 proposal
  soon thereafter). After looking into grenade, I don't think it is the
  right tool to test with, and I'm concerned that no one pointed
 this out
  sooner.
 
 Grenade is our release test tool, so I think that, barring details, it's
 reasonable to use $GRENADE when talking about this sort of thing. 
 
 
 Grenade uses tempest to validate the old and new stacks. Unless Sean
 and Matthew are willing to change that, this is a detail we can't ignore.

Old and new are just symbols, they can be anything you like. It is just
really 2 trees with 2 configs. For a bit of time after the juno release
we were accidentally upgrading from juno to juno, the tool chain didn't
really care (which actually made it a little harder to realize that we
borked up branch selection).

 I
 didn't realize that nova-bm doesn't work in devstack until you pointed
 it out in IRC last week. Since we're pretty good about requiring
 devstack support for new things like drivers, I would have expected
 nova-bm to work there, but obviously times were a bit different when
 that driver was merged. 
 
 
  Philosophically, this isn't an upgrade of one service from version
 X to
  Y. It's a replacement of one nova driver with a completely different
  driver. As I understand it, that's not what grenade is for. But maybe
  I'm wrong on this, or maybe it's flexible.
 
 I think it's start devstack on release X, validate, do some work,
 re-start devstack on release Y, validate. I'm not sure that it's
 ill-suited for this, but IANAGE. 
 
  
 
 
  I also have a technical objection: even if devstack can start and
  properly configure nova.virt.baremteal (which I doubt, because it
 isn't
  tested at all), it is going to fail the tempest/api/compute test suite
  horribly. The baremetal driver never passed tempest, and never had
  devstack-gate support. This matters because grenade uses tempest to
  validate a stack pre- and post-upgrade. Therefore, since we know that
  the old code is going to fail tempest, requiring grenade testing as a
  precondition to accepting the ironic driver effectively means we
 need to
  go develop the baremetal driver to a point it could pass tempest. I'm
  going to assume no one is actually suggesting that, and instead
 believe
  that none of us thought this through.
 
  (FWIW, Ironic doesn't pass the tempest/api/compute suite today, but
  we're working hard on it.)
 
 Do the devstack exercises pass? 
 
 
 A few of them passed, once upon a time, but the whole suite? It never
 passed on the baremetal driver for me. And it was never run or
 maintained in the gate.
  
 
 We test things like cells today (/me
 hears sdague scream in the background), which don't pass tempest, using
 the exercises to make sure it's at least able to create an instance. 

We don't even really know that... but that's a longer story. :)

Anyway, I veto devstack exercises as a test for this, they are
impossible to debug.

  So, I'd like to ask for suggestions on what sort of upgrade testing is
  reasonable here. I'll toss out two ideas:
  - load some fake data into the nova_bm schema, run the upgrade
 scripts,
  start ironic, issue some API queries, and see if the data's correct
  - start devstack, load some real data into the nova_bm schema, run the
  upgrade scripts, then try to deploy an instance with ironic
 
 These were my suggestions last week, so I'll own up to them now.
 Obviously I think that something using grenade that goes from a
 functional environment on release X to a functional environment on
 release Y is best. However, I of course don't think it makes sense to
 spend a ton of time getting nova-bm to pass tempest just so we can shoot
 it in the head.
 
 
 I'm glad to hear that, since everything up to this point in your reply
 seems to indicate that we should go back and add test coverage (whether
 tempest or exercise.sh) for the very code we are trying to delete.
 
 So my question remains. Even for option #2, while we can load some real
 data into the nova_bm schema, since we can't do any functional testing
 on it today, I don't think we should be expected to 

Re: [openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-21 Thread Sean Dague
On 07/21/2014 06:34 PM, Matthew Treinish wrote:
 
 Hi Everyone,
 
 I would like to propose 2 changes to the Tempest core team:
 
 First, I'd like to nominate Andrea Fritolli to the Tempest core team. Over the
 past cycle Andrea has been steadily become more actively engaged in the 
 Tempest
 community. Besides his code contributions around refactoring Tempest's
 authentication and credentials code, he has been providing reviews that have
 been of consistently high quality that show insight into both the project
 internals and it's future direction. In addition he has been active in the
 qa-specs repo both providing reviews and spec proposals, which has been very
 helpful as we've been adjusting to using the new process. Keeping in mind that
 becoming a member of the core team is about earning the trust from the members
 of the current core team through communication and quality reviews, not 
 simply a
 matter of review numbers, I feel that Andrea will make an excellent addition 
 to
 the team.
 
 As per the usual, if the current Tempest core team members would please vote 
 +1
 or -1(veto) to the nomination when you get a chance. We'll keep the polls open
 for 5 days or until everyone has voted.
 
 References:
 
 https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z
 
 http://stackalytics.com/?user_id=andrea-frittoli&metric=marks&module=qa-group
 
 
 The second change that I'm proposing today is to remove Giulio Fidente from 
 the
 core team. He asked to be removed from the core team a few weeks back because 
 he
 is no longer able to dedicate the required time to Tempest reviews. So if 
 there
 are no objections to this I will remove him from the core team in a few days.
 Sorry to see you leave the team Giulio...
 
 
 Thanks,
 
 Matt Treinish

+1

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] Zuul trigger not starting Jenkins jobs

2014-07-21 Thread Steven Weston
On 7/21/14, 3:13 AM, daya kamath wrote:
hi steve,
thanks a lot for following up! i'm based out of india, so there's not much 
overlap in timezones. i'll unicast you for next steps. wanted to post the info 
you asked for in this thread.
the 2 files are here - Paste #87382: http://paste.openstack.org/show/87382/

i just have some customizations to the devstack-gate script, but the overall 
framework is more or less intact as cloned from 
https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_master.sh. 
i'm not using nodepool currently, just 1 master and 1 slave node.

thanks!



From: Steven Weston swes...@brocade.com
To: daya kamath day...@yahoo.com; OpenStack 
Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, July 21, 2014 1:53 PM
Subject: Re: [openstack-dev] [third-party] Zuul trigger not starting Jenkins 
jobs

On 7/20/14, 5:25 PM, daya kamath wrote:

all,
Need some pointers on debugging what the issue is. It's not very convenient for 
me to be on IRC due to timezone issues, so hoping the mailing list is a 
good next best option.
When I post a patch on the sandbox project, I see a review indicating my system 
is starting 'check' jobs, but I don't see any activity in Jenkins for the job. I 
can run the job manually from the master.

tia!
daya
-
output from review.openstack.org -


IBM Neutron Testing
Jul 14 3:33 PM
Patch Set 1:
Starting check jobs. http://127.0.0.1/zuul/status

Output log from Zuul debug: http://paste.openstack.org/show/86642/
(excerpt: 2014-07-16 07:57:57,077 INFO zuul.Gerrit: Updating information for 
106722,1 ... DEBUG zuul.Scheduler: Adding trigger event: TriggerEvent...)

(configuration shows the job mapping properly, and it's receiving the triggers 
from upstream, but these are not firing any Jenkins jobs)

The Jenkins master connection to Gearman is showing status as ok.

gearman status command output -

status
build:noop-check-communication:master   0   0   2
build:dsvm-tempest-full 0   0   2
build:dsvm-tempest-full:devstack_slave  0   0   2
merger:merge0   0   1
build:ibm-dsvm-tempest-full 0   0   2
zuul:get_running_jobs   0   0   1
set_description:9.126.153.171   0   0   1
build:ibm-dsvm-tempest-full:devstack_slave  0   0   2
stop:9.126.153.171  0   0   1
zuul:promote0   0   1
build:noop-check-communication  0   0   2
zuul:enqueue0   0   1
merger:update   0   0   1




Hi Daya,

I did ping you back in IRC last week; however, you unfortunately had already 
signed off.  I have tried to ping you several times since, but every time I 
have checked you have not been online.

In my experience, this issue has been caused by a mismatch between the jobs 
configured in the Zuul pipelines and those configured in Jenkins.  Can you post 
your Jenkins Job Builder files (your projects.yaml file and the yaml file 
in which you defined the ibm-dsvm-tempest-full job)?  Also, please post your 
zuul.conf file and your layout.yaml file as well.

Please feel free to follow up with me at 
swes...@brocade.com.  I will be happy to continue 
our discussion over email.

Thanks,
Steve Weston
OpenStack Software Engineer



Daya,

Everything looks correct to me.  Curious, however, that you only have one slave 
and you have two workers registered in the gearman server.  If you connect to 
the gearman server and execute the workers command, what do you get as output?
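
If telnet is awkward from where you are, the same check can be scripted; a minimal
sketch against Gearman's admin protocol (assuming the default geard port 4730 on
the Zuul master) would be:

import socket

def gearman_admin(command, host='127.0.0.1', port=4730):
    # Send a Gearman admin command ('status' or 'workers') and return the reply.
    sock = socket.create_connection((host, port), timeout=10)
    try:
        sock.sendall((command.strip() + '\n').encode('ascii'))
        reply = b''
        while not reply.endswith(b'.\n'):  # admin replies end with a line holding '.'
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
        return reply.decode('ascii', 'replace')
    finally:
        sock.close()

print(gearman_admin('workers'))
print(gearman_admin('status'))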

I might suggest, at this point, the following:
1. Disable your gearman-jenkins plugin.
2.  Shut down your Jenkins service.
3.  On your Jenkins master, cd into /var/lib/jenkins/plugins and rm -rf 
gearman-plugin*
4.  Start the Jenkins service, verify the plugin is removed, then shut it back 
down.
5.  cd /var/lib/jenkins/plugins && wget 
http://tarballs.openstack.org/ci/gearman-plugin.hp
6.  Start the Jenkins service again.
7.  Make sure you reconnect to the Gearman server.

This has resolved many issues I've had in getting Jenkins to talk to Gearmand.

Steve

Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-21 Thread Jay Lau
Thanks Chris and Sylvain.

@Chris, yes, my case is to do a select_destination call, and then call
create/rebuild/migrate/evacuate while specifying the selected destination.

@Sylvain, I was also thinking of Gantt, but as you said, Gantt might be
available in K or L, which might be a bit late; that's why I said I want to
first do it in Nova then migrate to Gantt. OK, I agree with you: considering
the spec freeze is in effect now, I will revisit this in K or L and find a
workaround for now. ;-)
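
For anyone curious, the interim workaround I am looking at is to approximate the
candidate list client-side from hypervisor statistics with python-novaclient. A
rough sketch (credentials and flavor values are placeholders, and it only mimics
the RAM/vCPU/disk checks, not the real scheduler filters and weighers):

from novaclient import client

nova = client.Client('2', 'admin', 'password', 'admin',
                     'http://controller:5000/v2.0')  # placeholder credentials

flavor_ram_mb, flavor_vcpus, flavor_disk_gb = 2048, 2, 20  # placeholder flavor

# Hosts whose nova-compute service is up and enabled.
alive = set(svc.host for svc in nova.services.list(binary='nova-compute')
            if svc.state == 'up' and svc.status == 'enabled')

candidates = []
for hv in nova.hypervisors.list(detailed=True):
    if hv.service['host'] not in alive:
        continue
    if (hv.free_ram_mb >= flavor_ram_mb and
            hv.vcpus - hv.vcpus_used >= flavor_vcpus and
            hv.free_disk_gb >= flavor_disk_gb):
        candidates.append(hv.hypervisor_hostname)

print(candidates)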

Thanks.


2014-07-22 1:13 GMT+08:00 Sylvain Bauza sba...@redhat.com:

  On 21/07/2014 17:52, Jay Lau wrote:

 Sorry, correct one typo. I mean Promote select_destination as a REST API



 -1 to it. During the last Summit, we agreed on externalizing the current Scheduler
 code into a separate project called Gantt. For that, we agreed on first
 making the necessary changes within the Scheduler before recreating a new
 repository.

 By providing select_destinations as a new API endpoint, it would create a
 disruptive change where the Scheduler would have a new entrypoint.

 As this change would need a spec anyway and as there is a Spec Freeze now
 for Juno, I propose to delay this proposal until Gantt is created and
 propose a REST API for Gantt instead (in Kilo or L)

 -Sylvain


 2014-07-21 23:49 GMT+08:00 Jay Lau jay.lau@gmail.com:

  Now in OpenStack Nova, select_destination is used by
 create/rebuild/migrate/evacuate VM operations when selecting a target host for
 those operations.

  There is one requirement: some customers want to get the possible
 host list when creating/rebuilding/migrating/evacuating a VM so as to create a
 resource plan for those operations, but currently select_destination is not
 a REST API. Is it possible that we promote this API to be a REST API?

 --
  Thanks,

  Jay




 --
  Thanks,

  Jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-21 Thread David Kranz
+1

On Jul 21, 2014, at 6:37 PM, Matthew Treinish mtrein...@kortar.org wrote:

 
 Hi Everyone,
 
 I would like to propose 2 changes to the Tempest core team:
 
 First, I'd like to nominate Andrea Frittoli to the Tempest core team. Over the
 past cycle Andrea has steadily become more actively engaged in the 
 Tempest
 community. Besides his code contributions around refactoring Tempest's
 authentication and credentials code, he has been providing reviews that have
 been of consistently high quality that show insight into both the project
 internals and its future direction. In addition he has been active in the
 qa-specs repo both providing reviews and spec proposals, which has been very
 helpful as we've been adjusting to using the new process. Keeping in mind that
 becoming a member of the core team is about earning the trust from the members
 of the current core team through communication and quality reviews, not 
 simply a
 matter of review numbers, I feel that Andrea will make an excellent addition 
 to
 the team.
 
 As per the usual, if the current Tempest core team members would please vote 
 +1
 or -1(veto) to the nomination when you get a chance. We'll keep the polls open
 for 5 days or until everyone has voted.
 
 References:
 
 https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z
 
 http://stackalytics.com/?user_id=andrea-frittoli&metric=marks&module=qa-group
 
 
 The second change that I'm proposing today is to remove Giulio Fidente from 
 the
 core team. He asked to be removed from the core team a few weeks back because 
 he
 is no longer able to dedicate the required time to Tempest reviews. So if 
 there
 are no objections to this I will remove him from the core team in a few days.
 Sorry to see you leave the team Giulio...
 
 
 Thanks,
 
 Matt Treinish
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-21 Thread Masayuki Igawa
+1 !

On Jul 22, 2014 7:36 AM, Matthew Treinish mtrein...@kortar.org wrote:


 Hi Everyone,

 I would like to propose 2 changes to the Tempest core team:

 First, I'd like to nominate Andrea Frittoli to the Tempest core team.
Over the
 past cycle Andrea has steadily become more actively engaged in the
Tempest
 community. Besides his code contributions around refactoring Tempest's
 authentication and credentials code, he has been providing reviews that
have
 been of consistently high quality that show insight into both the project
 internals and its future direction. In addition he has been active in the
 qa-specs repo both providing reviews and spec proposals, which has been
very
 helpful as we've been adjusting to using the new process. Keeping in mind
that
 becoming a member of the core team is about earning the trust from the
members
 of the current core team through communication and quality reviews, not
simply a
 matter of review numbers, I feel that Andrea will make an excellent
addition to
 the team.

 As per the usual, if the current Tempest core team members would please
vote +1
 or -1(veto) to the nomination when you get a chance. We'll keep the polls
open
 for 5 days or until everyone has voted.

 References:

 https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z


http://stackalytics.com/?user_id=andrea-frittoli&metric=marks&module=qa-group


 The second change that I'm proposing today is to remove Giulio Fidente
from the
 core team. He asked to be removed from the core team a few weeks back
because he
 is no longer able to dedicate the required time to Tempest reviews. So if
there
 are no objections to this I will remove him from the core team in a few
days.
 Sorry to see you leave the team Giulio...


 Thanks,

 Matt Treinish

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to patch Horizon to change Shut Off Instance function?

2014-07-21 Thread Martinx - ジェームズ
Hello Stackers!

 I need to change the behavior of Shut Off Instance in Horizon: it needs
to gracefully halt the instance via ACPI, instead of just destroying it.

 How can I do that?!

Thanks!
Thiago
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally] Application for a new OpenStack Program: Performance and Scalability

2014-07-21 Thread Yingjun Li
Cool, Rally is really helpful for performance benchmarking and optimization of 
our OpenStack cloud.

On Jul 22, 2014, at 5:53, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi Stackers and TC,
 
 The Rally contributor team would like to propose a new OpenStack program
 with a mission to provide scalability and performance benchmarking, and
 code profiling tools for OpenStack components.
 
 We feel we've achieved a critical mass in the Rally project, with an
 active, diverse contributor team. The Rally project will be the initial
 project in a new proposed Performance and Scalability program.
 
 Below, the details on our proposed new program.
 
 Thanks for your consideration,
 Boris
 
 
 
 [1] https://review.openstack.org/#/c/108502/
 
 
 Official Name
 =
 
 Performance and Scalability
 
 Codename
 
 
 Rally
 
 Scope
 =
 
 Scalability benchmarking, performance analysis, and profiling of
 OpenStack components and workloads
 
 Mission
 ===
 
 To increase the scalability and performance of OpenStack clouds by:
 
 * defining standard benchmarks
 * sharing performance data between operators and developers
 * providing transparency of code paths through profiling tools
 
 Maturity
 
 
 * Meeting logs http://eavesdrop.openstack.org/meetings/rally/2014/
 * IRC channel: #openstack-rally
 * Rally performance jobs are in (Cinder, Glance, Keystone & Neutron)
 check pipelines.
 * > 950 commits over the last 10 months
 * Large, diverse contributor community
  * 
 http://stackalytics.com/?release=juno&metric=commits&project_type=All&module=rally
  * http://stackalytics.com/report/contribution/rally/180
 
 * Unofficial lead of the project is Boris Pavlovic
  * Official election in progress.
 
 Deliverables
 
 
 Critical deliverables in the Juno cycle are:
 
 * extending Rally Benchmark framework to cover all use cases that are
 required by all OpenStack projects
 * integrating OSprofiler in all core projects
 * increasing functional & unit testing coverage of Rally.
 
 Discussion
 ==
 
 One of the major goals of Rally is to make it simple to share results of
 standardized benchmarks and experiments between operators and
 developers. When an operator needs to verify certain performance
 indicators meet some service level agreement, he will be able to run
 benchmarks (from Rally) and share with the developer community the
 results along with his OpenStack configuration. These benchmark results
 will assist developers in diagnosing particular performance and
 scalability problems experienced with the operator's configuration.
 
 Another interesting area is Rally & the OpenStack CI process. Currently,
 working on performance issues upstream tends to be a more social than
 technical process. We can use Rally in the upstream gates to identify
 performance regressions and measure improvement in scalability over
 time. The use of Rally in the upstream gates will allow a more rigorous,
 scientific approach to performance analysis. In the case of an
 integrated OSprofiler, it will be possible to get detailed information
 about API call flows (e.g. duration of API calls in different services).
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - certificates data persistency

2014-07-21 Thread Stephen Balukoff
Evgeny--

The only reason I see for storing certificate information in Neutron (and
not private key information-- just the certificate) is to aid in presenting
UI information to the user. Especially GUI users don't care about a
certificate's UUID, they care about which hostnames it's valid for. Yes,
this can be loaded on the fly whenever public certificate information is
accessed, but the perception was that it would be a significant performance
increase to cache it.
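
For reference, pulling the hostnames out of a certificate on the fly is only a few
lines; here is a rough sketch using the cryptography library (the library choice
and exact calls are my assumption for illustration, not something the spec
mandates):

from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.x509.oid import NameOID

def cert_hostnames(pem_data):
    # Extract the UI-relevant fields (CN / subjectAltName DNS names) from a
    # PEM certificate, e.g. one fetched from Barbican.
    cert = x509.load_pem_x509_certificate(pem_data, default_backend())
    names = [attr.value for attr in
             cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        names.extend(san.value.get_values_for_type(x509.DNSName))
    except x509.ExtensionNotFound:
        pass
    return names

So the question is really whether re-parsing on every listing is cheap enough, or
whether caching SubjectCommonName/SubjectAltName buys a noticeable win for the
GUI case.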

Stephen


On Sun, Jul 20, 2014 at 4:32 AM, Evgeny Fedoruk evge...@radware.com wrote:

  Hi folks,



  In the current version of the TLS capabilities RST, certificate SubjectCommonName
  and SubjectAltName information is cached in the database.

 This may be not necessary and here is why:



 1.   TLS containers are immutable, meaning once a container was
 associated to a listener and was validated, it’s not necessary to validate
 the container anymore.
 This is relevant for both, default container and containers used for SNI.

  2.   LBaaS front-end API can check if TLS container ids were changed
 for a listener as part of an update operation. Validation of containers
 will be done for
 new containers only. This is stated in “Performance Impact” section of the
 RST, excepting the last statement that proposes persistency for SCN and SAN.

 3.   Any interaction with Barbican API for getting containers data
 will be performed via a common module API only. This module’s API is
 mentioned in
 “SNI certificates list management” section of the RST.

 4.   In case when driver really needs to extract certificate
 information prior to the back-end system provisioning, it will do it via
 the common module API.

 5.   Back-end provisioning system may cache any certificate data,
 except private key, in case of a specific need of the vendor.



 IMO, There is no real need to store certificates data in Neutron database
 and manage its life cycle.

  Does anyone see a reason why caching certificates’ data in the Neutron
  database is critical?



 Thank you,

 Evg




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Update on specs we needed approved

2014-07-21 Thread Stephen Balukoff
Yes, thanks guys! These are really important for features we want to get
into Neutron LBaaS in Juno! :D


On Mon, Jul 21, 2014 at 2:42 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 In reference to these 3 specs:

 TLS Termination - https://review.openstack.org/#/c/98640/
 L7 Switching - https://review.openstack.org/#/c/99709/
 Implementing TLS in reference Impl -
 https://review.openstack.org/#/c/100931/

 Kyle has +2'ed all three, and once Mark McClain +2's them then one of
 them will +A them.

 Thanks again Kyle and Mark!


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] PTL Candidacy

2014-07-21 Thread Yingjun Li
+1

On Jul 22, 2014, at 2:38, Boris Pavlovic bpavlo...@mirantis.com wrote:

 Hi, 
 
 I would like to propose my candidacy for Rally PTL.
 
 I started this project to make benchmarking of OpenStack as simple as possible. 
 This means not only load generation, but also an OpenStack-specific benchmark 
 framework, data analysis and integration with gates. All these things should 
 make it simple for developers and operators to benchmark (perf, scale, stress 
 test) OpenStack, share experiments & results, and have a fast way to find 
 what produces a bottleneck or just to ensure that OpenStack works well under 
 the load that they are expecting. 
 
 I am the current unofficial PTL and my responsibilities include things like:
 1) Adoption of the Rally architecture to cover everybody's use cases
 2) Building & managing the work of the community
 3) Writing a lot of code
 4) Working on docs & wiki 
 5) Helping newbies to join the Rally team 
 
 As PTL I would like to continue this work and finish my initial goals:
 1) Ensure that everybody's use cases are fully covered
 2) Ensure there is no monopoly in the project
 3) Run Rally in the gates of all OpenStack projects (currently we have check jobs 
 in Keystone, Cinder, Glance & Neutron)
 4) Continue work on making the project more mature. This covers topics like 
 increasing unit and functional test coverage and making Rally absolutely safe 
 to run against any production cloud.
 
 
 Best regards,
 Boris Pavlovic
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] PTL Candidacy

2014-07-21 Thread Duncan Thomas
On 21 July 2014 21:38, Boris Pavlovic bpavlo...@mirantis.com wrote:
 Hi,

 I would like to propose my candidacy for Rally PTL.

I've been working  with Boris on both Rally and the associated
OSProfiler code, and I can confirm he is dedicated, very open to ideas
and contributions, and I heartily recommend him for PTL.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova list Question

2014-07-21 Thread TAO ZHOU
1. What do you mean by
When I setup a openstack node, will it have the output of “nova list”?
2. Your output looks normal.



On Mon, Jul 21, 2014 at 8:15 PM, Johnson Cheng 
johnson.ch...@qsantechnology.com wrote:

  Dear All,



 When I setup a openstack node, will it have the output of “nova list”?



 Here is my output of “nova image-list”,


 +--------------------------------------+---------------------+--------+--------+
 | ID                                   | Name                | Status | Server |
 +--------------------------------------+---------------------+--------+--------+
 | e22d8a77-d3ad-458a-a073-aea8b185be22 | cirros-0.3.2-x86_64 | SAVING |        |
 +--------------------------------------+---------------------+--------+--------+



 But the output of “nova list” is empty,

 +----+------+--------+------------+-------------+----------+
 | ID | Name | Status | Task State | Power State | Networks |
 +----+------+--------+------------+-------------+----------+
 +----+------+--------+------------+-------------+----------+



 Is it correct?

 I want to use it to attach a cinder volume.





 Regards,

 Johnson



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-21 Thread Jay Pipes

On 07/21/2014 07:45 PM, Jay Lau wrote:

There is one requirement: some customers want to get the possible
host list when creating/rebuilding/migrating/evacuating a VM so as to create a
resource plan for those operations, but currently select_destination is
not a REST API. Is it possible that we promote this API to be a REST API?


Which customers want to get the possible host list?

/me imagines someone asking Amazon for a REST API that returned all the 
possible servers that might be picked for placement... and what answer 
Amazon might give to the request.


If by customer, you are referring to something like IBM Smart Cloud 
Orchestrator, then I don't really see the point of supporting something 
like this. Such a customer would only need to create a resource plan 
for those operations if it was wholly supplanting large pieces of 
OpenStack infrastructure, including parts of Nova and much of Heat.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova list Question

2014-07-21 Thread Anne Gentle
On Mon, Jul 21, 2014 at 7:15 AM, Johnson Cheng 
johnson.ch...@qsantechnology.com wrote:

  Dear All,



 When I setup a openstack node, will it have the output of “nova list”?



 Here is my output of “nova image-list”,


 +--------------------------------------+---------------------+--------+--------+
 | ID                                   | Name                | Status | Server |
 +--------------------------------------+---------------------+--------+--------+
 | e22d8a77-d3ad-458a-a073-aea8b185be22 | cirros-0.3.2-x86_64 | SAVING |        |
 +--------------------------------------+---------------------+--------+--------+



 But the output of “nova list” is empty,

 +----+------+--------+------------+-------------+----------+
 | ID | Name | Status | Task State | Power State | Networks |
 +----+------+--------+------------+-------------+----------+
 +----+------+--------+------------+-------------+----------+



 Is it correct?


This output is correct when you have not launched any instances.

See http://docs.openstack.org/user-guide/content/cli_launch_instances.html


  I want to use it to attach a cinder volume.






Once you have an instance running, use these instructions to attach a
volume.

http://docs.openstack.org/user-guide/content/cli_manage_volumes.html#cli_attach_volume
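
If you prefer to script it, the same flow with python-novaclient looks roughly like
this (a sketch only; credentials, names and the volume ID are placeholders):

from novaclient import client

nova = client.Client('2', 'admin', 'password', 'admin',
                     'http://controller:5000/v2.0')

# Boot an instance; the image must be ACTIVE (yours currently shows SAVING).
# Once the instance is ACTIVE it will appear in `nova list`.
server = nova.servers.create(name='test-vm',
                             image=nova.images.find(name='cirros-0.3.2-x86_64'),
                             flavor=nova.flavors.find(name='m1.tiny'))

# Attach an existing Cinder volume to the running instance.
nova.volumes.create_server_volume(server.id, '<cinder-volume-uuid>', '/dev/vdb')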




  Regards,

 Johnson



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] About the BP: gateway-of-object-storage

2014-07-21 Thread Tong Yanqun
Hi all,

I registered a BP for the feature that makes Swift work as an object storage 
gateway [1], and submitted the specification [2] last Sunday. Could you give some 
review and opinions about it, please?

First, I made a wrong commit of the spec [3]. Then I tried to modify and 
overwrite it, but committed another spec. Sorry about that mistake, but I don't 
know how to delete the wrong one. Please ignore it.

[1] https://blueprints.launchpad.net/swift/+spec/gateway-of-object-storage
[2] https://review.openstack.org/#/c/108230/
http://docs-draft.openstack.org/30/108230/1/check/gate-swift-specs-docs/34b8223/doc/build/html/specs/swift/gateway-of-object-storage.html
[3] https://review.openstack.org/#/c/108229/

Thanks and best regards!
Tong Yanqun
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in glance

2014-07-21 Thread Jay Pipes

On 07/17/2014 03:07 AM, Tailor, Rajesh wrote:

Hi all,

Why is Glance not using Launcher/ProcessLauncher (oslo-incubator) for
its WSGI service like it is used in other OpenStack projects, i.e. Nova,
Cinder, Keystone, etc.?


Glance uses the same WSGI service launch code as the other OpenStack 
project from which that code was copied: Swift.



As of now when SIGHUP signal is sent to glance-api parent process, it
calls the callback handler and then throws OSError.

The OSError is thrown because the os.wait system call was interrupted by the
SIGHUP callback handler.

As a result of this parent process closes the server socket.

All the child processes also get terminated without completing existing
API requests because the server socket is already closed, and the service
doesn’t restart.

Ideally when SIGHUP signal is received by the glance-api process, it
should process all the pending requests and then restart the glance-api
service.

If (oslo-incubator) Launcher/ProcessLauncher is used in glance then it
will handle service restart on ‘SIGHUP’ signal properly.

Can anyone please let me know what will be the positive/negative impact
of using Launcher/ProcessLauncher (oslo-incubator) in glance?


Sounds like you've identified at least one good reason to move to 
oslo-incubator's Launcher/ProcessLauncher. Feel free to propose patches 
which introduce that change to Glance. :)
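
For anyone digging into this, the EINTR behaviour described above is the classic
interrupted-system-call pattern. A minimal standalone sketch (not Glance or Oslo
code) of a parent that survives SIGHUP while reaping its workers would be:

import errno
import os
import signal

def _on_sighup(signum, frame):
    # A real service would re-read config / re-exec here; this sketch only
    # demonstrates surviving the interrupted os.wait() call.
    print('caught SIGHUP')

signal.signal(signal.SIGHUP, _on_sighup)

def reap_children():
    while True:
        try:
            pid, status = os.wait()
            print('child %d exited with status %d' % (pid, status))
        except OSError as e:
            if e.errno == errno.EINTR:
                # A signal (e.g. SIGHUP) interrupted the wait: retry instead of
                # tearing down the server socket and the workers.
                continue
            if e.errno == errno.ECHILD:
                return  # no more children left to wait for
            raise

Oslo's ProcessLauncher wraps this kind of loop (plus the socket handling and child
re-spawning), which is presumably why it degrades more gracefully on SIGHUP, as
Rajesh describes.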



Thank You,

Rajesh Tailor
__
Disclaimer:This email and any attachments are sent in strictest
confidence for the sole use of the addressee and may contain legally
privileged, confidential, and proprietary data. If you are not the
intended recipient, please advise the sender by replying promptly to
this email and then delete and destroy this email and any attachments
without any further use, copying or forwarding


Please advise your corporate IT department that the above disclaimer on 
your emails is annoying, is entirely disregarded by 99.999% of the real 
world, has no legal standing or enforcement, and may be a source of 
problems with people's mailing list posts being sent into spam boxes.


All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-21 Thread Kenichi Oomichi
+1

Andrea has already worked well for Tempest.

Thanks
Ken'ichi Ohmichi

---

 -Original Message-
 From: Matthew Treinish [mailto:mtrein...@kortar.org]
 Sent: Tuesday, July 22, 2014 7:34 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [QA] Proposed Changes to Tempest Core
 
 
 Hi Everyone,
 
 I would like to propose 2 changes to the Tempest core team:
 
 First, I'd like to nominate Andrea Frittoli to the Tempest core team. Over the
 past cycle Andrea has steadily become more actively engaged in the 
 Tempest
 community. Besides his code contributions around refactoring Tempest's
 authentication and credentials code, he has been providing reviews that have
 been of consistently high quality that show insight into both the project
 internals and its future direction. In addition he has been active in the
 qa-specs repo both providing reviews and spec proposals, which has been very
 helpful as we've been adjusting to using the new process. Keeping in mind that
 becoming a member of the core team is about earning the trust from the members
 of the current core team through communication and quality reviews, not 
 simply a
 matter of review numbers, I feel that Andrea will make an excellent addition 
 to
 the team.
 
 As per the usual, if the current Tempest core team members would please vote 
 +1
 or -1(veto) to the nomination when you get a chance. We'll keep the polls open
 for 5 days or until everyone has voted.
 
 References:
 
 https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z
 
 http://stackalytics.com/?user_id=andrea-frittoli&metric=marks&module=qa-group
 
 
 The second change that I'm proposing today is to remove Giulio Fidente from 
 the
 core team. He asked to be removed from the core team a few weeks back because 
 he
 is no longer able to dedicate the required time to Tempest reviews. So if 
 there
 are no objections to this I will remove him from the core team in a few days.
 Sorry to see you leave the team Giulio...
 
 
 Thanks,
 
 Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-21 Thread Isaku Yamahata
On Mon, Jul 21, 2014 at 02:52:04PM -0500,
Kyle Mestery mest...@mestery.com wrote:

  Following up with post SAD status:
 
  * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
extension support
 
  Remains unapproved, no negative feedback on current revision.
 
  * https://review.openstack.org/#/c/106222/ Add Port Security
Implementation in ML2 Plugin
 
  Has a -2 to highlight the significant overlap with 99873 above.
 
  Although there were some discussions about these last week I am not sure we 
  reached consensus on whether either of these (or even both of them) are the 
  correct path forward - particularly to address the problem Brent raised 
  w.r.t. to creation of networks without subnets - I believe this currently 
  still works with nova-network?
 
  Regardless, I am wondering if either of the spec authors intend to propose 
  these for a spec freeze exception?
 
 For the port security implementation in ML2, I've had one of the
 authors reach out to me. I'd like them to send an email to the
 openstack-dev ML though, so we can have the discussion here.

As I commented on Gerrit, the two authors of port security
(Shweta and I) have agreed that the blueprints/specs will be unified.
I'll send a mail for a spec freeze exception soon.

thanks,
-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-21 Thread Jay Lau
Hi Jay,

There are indeed some customers in China who want this feature because, before they
do some operations, they want to check the action plan, such as where the
VM will be migrated or created; they want to use an interactive mode to do
some operations to make sure there are no errors.

Thanks.


2014-07-22 10:23 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 07/21/2014 07:45 PM, Jay Lau wrote:

  There is one requirement: some customers want to get the possible
  host list when creating/rebuilding/migrating/evacuating a VM so as to create a
  resource plan for those operations, but currently select_destination is
  not a REST API. Is it possible that we promote this API to be a REST API?


 Which customers want to get the possible host list?

 /me imagines someone asking Amazon for a REST API that returned all the
 possible servers that might be picked for placement... and what answer
 Amazon might give to the request.

 If by customer, you are referring to something like IBM Smart Cloud
 Orchestrator, then I don't really see the point of supporting something
 like this. Such a customer would only need to create a resource plan for
 those operations if it was wholly supplanting large pieces of OpenStack
 infrastructure, including parts of Nova and much of Heat.

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Spec Freeze Exception] ml2-ovs-portsecurity

2014-07-21 Thread Isaku Yamahata

I'd like to request Juno spec freeze exception for ML2 OVS portsecurity
extension.

- https://review.openstack.org/#/c/99873/
  ML2 OVS: portsecurity extension support

- https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity
  Add portsecurity support to ML2 OVS mechanism driver

The spec/blueprint adds the portsecurity extension to the ML2 plugin and implements
it in the OVS mechanism driver with the iptables_firewall driver.
The spec has gotten five +1s across many respins.
This feature will be a foundation for running network services within VMs.

There is another spec whose goal is same.
- https://review.openstack.org/#/c/106222/
  Add Port Security Implementation in ML2 Plugin
The author, Shweta, and I have agreed to consolidate those specs/blueprints
and unite for the same goal.

Thanks,
-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone/swift] role-based access cotrol in swift

2014-07-21 Thread Osanai, Hisashi

Hi,

Thank you for the info.

On Monday, July 21, 2014 10:19 PM, Nassim Babaci wrote:

 * Adding policy engine support to Swift
 https://review.openstack.org/#/c/89568/
Judging from the commit message in 89568, you have developed the same function, 
except for supporting the policy.json file format.

 My answer is may be a little bite late but here's a swift middleware we
 have just published: https://github.com/cloudwatt/swiftpolicy
 It is based on the keystoneauth middleware, and uses oslo.policy file
 format.
I would like to know the following points. Do you have info for them?
- difference b/w policy.json file format and oslo.policy file format
- relationship b/w https://review.openstack.org/#/c/89568/ and 
  https://github.com/cloudwatt/swiftpolicy

Best Regards,
Hisashi Osanai
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] PTL Candidacy

2014-07-21 Thread Joshua Harlow
+1

I've been helping out with the osprofiler code (and tiny parts of rally), and 
Boris would be a welcome PTL imho.

-Josh

On Jul 21, 2014, at 11:38 AM, Boris Pavlovic bpavlo...@mirantis.com wrote:

 Hi, 
 
 I would like to propose my candidacy for Rally PTL.
 
 I started this project to make benchmarking of OpenStack as simple as possible. 
 This means not only load generation, but also an OpenStack-specific benchmark 
 framework, data analysis and integration with gates. All these things should 
 make it simple for developers and operators to benchmark (perf, scale, stress 
 test) OpenStack, share experiments & results, and have a fast way to find 
 what produces a bottleneck or just to ensure that OpenStack works well under 
 the load that they are expecting. 
 
 I am the current unofficial PTL and my responsibilities include things like:
 1) Adoption of the Rally architecture to cover everybody's use cases
 2) Building & managing the work of the community
 3) Writing a lot of code
 4) Working on docs & wiki 
 5) Helping newbies to join the Rally team 
 
 As PTL I would like to continue this work and finish my initial goals:
 1) Ensure that everybody's use cases are fully covered
 2) Ensure there is no monopoly in the project
 3) Run Rally in the gates of all OpenStack projects (currently we have check jobs 
 in Keystone, Cinder, Glance & Neutron)
 4) Continue work on making the project more mature. This covers topics like 
 increasing unit and functional test coverage and making Rally absolutely safe 
 to run against any production cloud.
 
 
 Best regards,
 Boris Pavlovic
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >