Re: [openstack-dev] [Compass] Call for contributors

2015-08-13 Thread Jesse Pretorius
On 12 August 2015 at 17:23, Weidong Shao weidongs...@gmail.com wrote:


 Compass is not new to OpenStack community. We started it as an OpenStack
 deployment tool at the HongKong summit. We then showcased it at the Paris
 summit.

 However, the project has gone through some changes recently. We'd like to
 re-introduce Compass and welcome new developers to expand our efforts,
 share in its design, and advance its usefulness to the OpenStack community.

 We intend to follow the 4 openness guidelines and enter the Big Tent. We
 have had some feedback from TC reviewers and others and realize we have
 some work to do to get there. More developers interested in working on the
 project will get us there easier.

 Besides the openness criteria, there is critical developer work we need to
 do to meet the other OpenStack criteria. For example, we have forked Chef
 cookbooks, and Ansible playbooks written from scratch for OpenStack
 deployment. We need to merge the Compass Ansible playbooks back into the
 upstream OpenStack repo (os-ansible-deployment).

 We also need to reach out to other related projects, such as Ironic, to
 make sure that where our efforts overlap, we provide added value, not
 different ways of doing the same thing.

 We think this work will add a lot of value to the OpenStack community.


- The project wiki page is at https://wiki.openstack.org/wiki/Compass
- The launchpad is: https://launchpad.net/compass
- The weekly IRC meeting is in openstack-meeting4 at 0100 UTC on Thursdays
(or Wed 6pm PDT)
- Code repo is under stackforge
https://github.com/stackforge/compass-core
https://github.com/stackforge/compass-web
https://github.com/stackforge/compass-adapters

 Hi Weidong,

This looks like an excellent project and we (the openstack-ansible project)
would love to assist you with the integration of Compass with openstack-ansible
(aka os-ansible-deployment).

I'd like to discuss with your team how we can work together to facilitate
Compass' consumption of the playbooks/roles we produce in a suitable way,
and will try to attend the next meeting (as I seem to have missed this
week's meeting). We'd like to understand the project's needs so that we can
work towards defined goals to accommodate them, while also maintaining our
stability for other downstream consumers.

We also invite you to attend our next meeting on Thu 16:00 UTC in
#openstack-meeting-4 - details are here for reference:
https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Community_Meeting

Looking forward to working with you!

Best regards,

Jesse
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tempest] Can we add testcase for download image v2?

2015-08-13 Thread Deore, Pranali11
Hi,

While going through the tempest code, I have found that the image-download v2
API test is missing in tempest.
Can I add the API test for the same? Please suggest.

Also, the glance task import API related test cases are not in tempest. Is
it OK if I add tests for those as well?


Thanks

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ][third-party-ci]Running custom code before tests

2015-08-13 Thread Eduard Matei
Hi,

I think you pointed me to the wrong file, the devstack-gate yaml (and line
2201 contains timestamps).
I need an example of how to configure tempest to use my driver.

I tried exporting variables in the Jenkins job (before executing the dsvm
shell script), but looking at tempest.txt (the log) it shows that it still
uses the defaults.
How do I override those defaults?

Thanks,

Eduard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-13 Thread David Chadwick
Hi Jamie

see

http://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-idp-discovery.pdf

regards
David

On 13/08/2015 02:06, Jamie Lennox wrote:
 
 
 - Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, 13 August, 2015 3:06:46 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 On 11/08/2015 01:46, Jamie Lennox wrote:
 
 
 - Original Message -
 From: Jamie Lennox jamielen...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Tuesday, 11 August, 2015 10:09:33 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 - Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, 11 August, 2015 12:50:21 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 On 10/08/2015 01:53, Jamie Lennox wrote:
 
 
 - Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Sunday, August 9, 2015 12:29:49 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 Hi Jamie
 
 nice presentation, thanks for sharing it. I have
 forwarded it to my students working on federation aspects
 of Horizon.
 
 About public federated cloud access, the way you envisage
 it, i.e. that every user will have his own tailored
 (subdomain) URL to the SP is not how it works in the real
 world today. SPs typically provide one URL, which
 everyone from every IdP uses, so that no matter which
 browser you are using, from wherever you are in the
 world, you can access the SP (via your IdP). The only
 thing the user needs to know, is the name of his IdP, in
 order to correctly choose it.
 
 So discovery of all available IdPs is needed. You are
 correct in saying that Shib supports a separate discovery
 service (WAYF), but Horizon can also play this role, by
 listing the IdPs for the user. This is the mod that my
 student is making to Horizon, by adding type ahead
 searching.
 
 So my point at the moment is that, unless there's something
 I'm missing in the way shib/mellon discovery works, Horizon
 can't do this. Because we forward to a common websso entry
 point, there is no way (that I know of) for the user's
 selection in Horizon to be forwarded to Keystone. You would
 still need a custom select-your-IdP discovery page in front
 of Keystone. I'm not sure if this addition is part of your
 student's work; it just hasn't been mentioned yet.
 
 About your proposed discovery mod, surely this seems to
 be going in the wrong direction. A common entry point to 
 Keystone for all IdPs, as we have now with WebSSO, seems
 to be preferable to separate entry points per IdP. Which
 high street shop has separate doors for each user? Or
 have I misunderstood the purpose of your mod?
 
 The purpose of the mod is purely to bypass the need to have
 a shib/mellon discovery page on
 /v3/OS-FEDERATION/websso/saml2. This page is currently
 required to allow a user to select their idp (presumably
 from the ones supported by keystone) and redirect to that
 IDPs specific login page.
 
 There are two functionalities that are required: a) Horizon 
 finding the redirection login URL of the IdP chosen by the
 user b) Keystone finding which IdP was used for login.
 
 The second is already done by Apache telling Keystone in the 
 header field.
 
 The first is part of the metadata of the IdP, and Keystone
 should make this available to Horizon via an API call.
 Ideally when Horizon calls Keystone for the list of trusted
 IdPs, then the user friendly name of the IdP (to be displayed
 to the user) and the login page URL should be returned. Then
 Horizon can present the user friendly list to the user, get
 the login URL that matches this, then redirect the user to
 the IdP telling the IdP the common callback URL of Keystone.
 
 So my understanding was that this wasn't possible. Because we
 want to have keystone be the registered service provider and
 receive the returned SAML assertions the login redirect must be
 issued from keystone and not horizon. Is it possible to issue a
 login request from horizon that returns the response to
 keystone? This seems dodgy to me but may be possible if all the
 trust relationships are set up.
 
 Note also that currently this metadata, including the login URL, is
 not known by keystone. It's controlled by apache in the metadata
 xml files, so we would have to add this information to keystone.
 Obviously this is doable, just extra config setup that would
 require double handling of this URL.
 
 My idea is to use Horizon as the WAYF/Discovery service,
 approximately as follows
 
 1. The user goes to Horizon to login locally or to discover which
 federated IdP to use
 2. Horizon dynamically populates the list of IdPs by querying Keystone
 3. The user chooses the IdP and Horizon redirects the user to 

Re: [openstack-dev] [Stable][Nova] VMware NSXv Support

2015-08-13 Thread John Garbutt
On Wednesday, August 12, 2015, Thierry Carrez thie...@openstack.org wrote:

 Gary Kotton wrote:
 
  On 8/12/15, 12:12 AM, Mike Perez thin...@gmail.com javascript:;
 wrote:
  On 15:39 Aug 11, Gary Kotton wrote:
  On 8/11/15, 6:09 PM, Jay Pipes jaypi...@gmail.com javascript:;
 wrote:
 
  Are you saying that *new functionality* was added to the stable/kilo
  branch of *Neutron*, and because new functionality was added to
  stable/kilo's Neutron, that stable/kilo *Nova* will no longer work?
 
  Yes. That is exactly what I am saying. The issue is as follows. The
  NSXv
  manager requires the virtual machine's VNIC index to enable the security
  groups to work. Without that a VM will not be able to send and receive
  traffic. In addition to this the NSXv plugin does not have any agents
 so
  we need to do the metadata plugin changes to ensure metadata support.
 So
  effectively with the patches: https://review.openstack.org/209372 and
  https://review.openstack.org/209374 the stable/kilo nova code will not
  work with the stable/kilo neutron NSXv plugin.
  snip
 
  So what do you suggest?
 
  This was added in Neutron during Kilo [1].
 
  It's the responsibility of the patch owner to revert things if something
  doesn't land in a dependency patch of some other project.
 
  I'm not familiar with the patch, but you can see if Neutron folks will
  accept
  a revert in stable/kilo. There's no reason to get other projects
 involved
  because this wasn't handled properly.
 
  [1] - https://review.openstack.org/#/c/144278/
 
  So you are suggesting that we revert the neutron plugin? I do not think
  that a revert is relevant here.

 Yeah, I'm not sure reverting the Neutron patch would be more acceptable.
 That one landed in Neutron kilo in time.

 The issue here is that due to Nova's review velocity during the kilo
 cycle (and arguably the failure to raise this as a cross-project issue
 affecting the release), the VMware NSXv support was shipped as broken in
 Kilo, and requires non-trivial changes to get fixed.


I see this as Nova not shipping with VMware NSXv support in kilo, the
feature was never completed, rather than it being broken. I could be
missing something, but I also know that difference doesn't really help
anyone.


 We have two options: bending the stable rules to allow the fix to be
 backported, or document it as broken in Kilo with the invasive patches
 being made available for people and distributions who still want to
 apply it.

 Given that we are 4 months into Kilo, I'd say stable/kilo users are used
 to this being broken at this point, so my vote would go for the second
 option.


This would be backporting a new driver to an older release. That seems like
a bad idea.


 That said, we should definitely raise [1] as a cross-project issue and
 see how we could work it into Liberty, so that we don't end up in the
 same dark corner in 4 months. I just don't want to break the stable
 rules (and the user confidence we've built around us applying them) to
 retroactively pay back review velocity / trust issues within Nova.

 [1] https://review.openstack.org/#/c/165750/


So this is the same issue. The VMware neutron driver has merged support for
a feature that we have not managed to get into Nova yet.

First the long term view...

This is happening more frequently with Cinder drivers/features, Neutron
things, and to a lesser extent Glance.

The great work the Cinder folks have done with brick is hopefully going to
improve the situation for Cinder. There is a group of folks working on a
similar VIF-focused library to help make it easier to add support for new
Neutron VIF drivers without needing to merge things in Nova.

Right now those above efforts are largely focused on libvirt, but using
oslo.vmware, or probably something else, I am sure we could evolve
something similar for VMware, but I haven't dug into that.

There are lots of coding efforts and process efforts to make the most of
our current review bandwidth and to expand that bandwidth, but I don't
think it's helpful to get into that here.

So, more short term and specific points...

This patch had no bug or blueprint attached. It eventually got noticed a
few weeks after the blueprint freeze. It's hard to track cross-project
dependencies if we don't know they exist. None of the various escalation
paths raised this patch. None of those things are good, but they happened;
things happen.

Now it's a priority call. We have already delayed several blueprints (20 or
30) to try to get as many bugs as possible fixed on features that have
already made it into the tree (we already have a backlog of well over 100
bug patches to review) and to keep the priority items moving forward (which
are mostly there to help us go faster in the near future).

Right now my gut tells me, partly in fairness to all the other things we
have just not managed to get reviewed that did follow the process and met
all the deadlines but were also unable to get merged, we should wait until
Mitaka for this one, 

Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-08-13 Thread Gilles Dubreuil
Hi Matthew,

On 11/08/15 01:14, Rich Megginson wrote:
 On 08/10/2015 07:46 AM, Matthew Mosesohn wrote:
 Sorry to everyone for bringing up this old thread, but it seems we may
 need more openstackclient/keystone experts to settle this.

 I'm referring to the comments in https://review.openstack.org/#/c/207873/
 Specifically comments from Richard Megginson and Gilles Dubreuil
 indicating openstackclient behavior for v3 keystone API.


 A few items seem to be under dispute:
 1 - Keystone should be able to accept v3 requests at
 http://keystone-server:5000/
 
 I don't think so.  Keystone requires the version suffix /v2.0 or /v3.
 

Yes, if the public endpoint is set without a version then the service
can deal with either version.

http://paste.openstack.org/raw/412819/

That is not true for the admin endpoint (authentication is already done;
the admin service deals only with tokens), at least for now; Keystone
devs are working on it.

 2 - openstackclient should be able to interpret v3 requests and append
 v3/ to OS_AUTH_URL=http://keystone-server:5000/ or rewrite the URL
 if it is set as
 OS_AUTH_URL=http://keystone-server:5000/
 
 It does, if it can determine from the given authentication arguments if
 it can do v3 or v2.0.
 

It effectively does, again, assuming the path doesn't contain a version
number (x.x.x.x:5000)

 3 - All deployments require /etc/keystone/keystone.conf with a token
 (and not simply use openrc for creating additional endpoints, users,
 etc beyond keystone itself and an admin user)
 
 No.  What I said about this issue was Most people using
 puppet-keystone, and realizing Keystone resources on nodes that are not
 the Keystone node, put a /etc/keystone/keystone.conf on that node with
 the admin_token in it.
 
 That doesn't mean the deployment requires /etc/keystone/keystone.conf. 
 It should be possible to realize Keystone resources on non-Keystone
 nodes by using ENV or openrc or other means.
 

Agreed. Also keystone.conf is used only to bootstrap keystone
installation and create admin users, etc.



 I believe it should be possible to set v2.0 keystone OS_AUTH_URL in
 openrc and puppet-keystone + puppet-openstacklib should be able to
 make v3 requests sensibly by manipulating the URL.
 
 Yes.  Because for the puppet-keystone resource providers, they are coded
 to a specific version of the api, and therefore need to be able to
 set/override the OS_IDENTITY_API_VERSION and the version suffix in the URL.
 

No. To support both v2 and v3, the OS_AUTH_URL must not contain any
version number.

The less we deal with the version number the better!
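For what it's worth, the kind of openrc being discussed would look roughly
like this (a sketch only; the hostname, credentials and project are
placeholders, and the unversioned OS_AUTH_URL is the important part, with
the API version selected separately):

  export OS_AUTH_URL=http://keystone-server:5000/
  export OS_IDENTITY_API_VERSION=3
  export OS_USERNAME=admin
  export OS_PASSWORD=secret
  export OS_PROJECT_NAME=admin
  export OS_USER_DOMAIN_NAME=Default
  export OS_PROJECT_DOMAIN_NAME=Default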

 Additionally, creating endpoints/users/roles should be possible via
 openrc alone.
 
 Yes.
 

Yes, the openrc variables are used; if they are not available then the
service token from keystone.conf is used.

 It's not possible to write single node tests that can demonstrate this
 functionality, which is why it probably went undetected for so long.
 
 And since this is supported, we need tests for this.

I'm not sure what the issue is, besides the fact that keystone_puppet doesn't
generate an RC file once the admin user has been created. That is left to
be done by the composition layer, although we might want to integrate that.

If that issue persists, assuming the AUTH_URL is free of a version
number and an openrc is in place, we're going to need a bug number
to track the investigation.


 If anyone can speak up on these items, it could help influence the
 outcome of this patch.

 Thank you for your time.

 Best Regards,
 Matthew Mosesohn


Thanks,
Gilles


 On Fri, Jul 31, 2015 at 6:32 PM, Rich Megginson rmegg...@redhat.com
 mailto:rmegg...@redhat.com wrote:

 On 07/31/2015 07:18 AM, Matthew Mosesohn wrote:

 Jesse, thanks for raising this. Like you, I should just track
 upstream
 and wait for full V3 support.

 I've taken the quickest approach and written fixes to
 puppet-openstacklib and puppet-keystone:
 https://review.openstack.org/#/c/207873/
 https://review.openstack.org/#/c/207890/

 and again to Fuel-Library:
 https://review.openstack.org/#/c/207548/1

 I greatly appreciate the quick support from the community to
 find an
 appropriate solution. Looks like I'm just using a weird edge case
 where we're creating users on a separate node from where
 keystone is
 installed and it never got thoroughly tested, but I'm happy to fix
 bugs where I can.


 Most puppet deployments either realize all keystone resources on
 the keystone node, or drop an /etc/keystone/keystone.conf with
 admin token onto non-keystone nodes where additional keystone
 resources need to be realized.



 -Matthew

 On Fri, Jul 31, 2015 at 3:54 PM, Jesse Pretorius
 jesse.pretor...@gmail.com mailto:jesse.pretor...@gmail.com
 wrote:

 With 

Re: [openstack-dev] ][third-party-ci]Running custom code before tests

2015-08-13 Thread Andrey Pavlov
Hi,
this file has changed since yesterday.
The new link is
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/devstack-gate.yaml#L2146
or you can find these lines yourself:
  export DEVSTACK_LOCAL_CONFIG=CINDER_ISCSI_HELPER=lioadm
  export DEVSTACK_LOCAL_CONFIG+=$'\n'CINDER_LVM_TYPE=thin
I mean that you can try to change CINDER_ISCSI_HELPER in devstack.
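For example, assuming your driver is configured through devstack variables,
you could extend the exports in the Jenkins job before the dsvm script runs
(a sketch only; the values are placeholders and the exact variable names
depend on your devstack plugin/backend):

  export DEVSTACK_LOCAL_CONFIG="CINDER_ISCSI_HELPER=lioadm"
  export DEVSTACK_LOCAL_CONFIG+=$'\n'"CINDER_LVM_TYPE=thin"
  export DEVSTACK_LOCAL_CONFIG+=$'\n'"CINDER_ENABLED_BACKENDS=<driver>:<backend-name>"

Everything in DEVSTACK_LOCAL_CONFIG is appended to devstack's local
configuration by devstack-gate, which is what ends up driving the values
written into tempest.conf.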

On Thu, Aug 13, 2015 at 9:47 AM, Eduard Matei 
eduard.ma...@cloudfounders.com wrote:

 Hi,

 I think you pointed me to the wrong file, the devstack-gate yaml (and line
 2201 contains timestamps).
 I need an example of how to configure tempest to use my driver.

 I tried EXPORT in the jenkins job (before executing dsvm shell script) but
 looking at the tempest.txt (log) it shows that it still uses the defaults.
 How do i overwrite those defaults?

 Thanks,

 Eduard

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kind regards,
Andrey Pavlov.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Possible issues cryptography 1.0 and os-client-config 1.6.2

2015-08-13 Thread Robert Collins
tl;dr - developers running devstack probably want to be using
USE_CONSTRAINTS=True, otherwise there may be a few rough hours today.
[http://lists.openstack.org/pipermail/openstack-dev/2015-July/068569.html]
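(For reference, a sketch of what that means in practice, assuming a recent
devstack that understands the setting: put the flag in your local.conf
localrc section before running stack.sh.)

  [[local|localrc]]
  USE_CONSTRAINTS=True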


Apparently some third party CI systems are seeing an issue with
cryptography 1.0 - https://review.openstack.org/#/c/212349/ - but I
haven't reproduced it in local experiments. We haven't seen it in the
gate yet (according to logstash.o.o). That's likely due to it only
affecting devstack in that way, and constraints insulating us from it.

Hopefully the cryptography thing, whatever it is, won't affect unit
tests, which are not yet using constraints (but that's being worked on
as fast as we can!)


There was a separate issue with os-client-config that blew up on 1.6.2,
but that's been reproduced and dealt with - though the commit hasn't
gotten through the gate yet, so it's possible we'll be dealing with a
dual-defect problem.

That said, again, constraints correctly insulated devstack from the
1.6.2 release - we detected it when the update proposal failed, rather
than in the gate queue, so \o/.

AFAICT the os-client-config thing won't affect any unit tests.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum]password for registry v2

2015-08-13 Thread 王华
Hi all,

In order to add registry v2 to bay nodes [1], authentication information is
needed for the registry to upload files to and download files from swift. The
swift storage-driver in the registry needs the parameters described in [2],
including the user's password. How can we get the password?

1. Let user pass password in baymodel-create.
2. Use user token to get password from keystone

Is it suitable to store user password in db?

It may be insecure to store the password in the db and expose it to the user
in a config file, even if the password is encrypted. Heat stored user
passwords in the db before, and has now changed to keystone trusts [3]. But
if we use keystone trusts, the swift storage-driver does not support them. If
we use a trust, we expose the magnum user's credential in a config file,
which is also insecure.

Is there a secure way to implement this bp?

[1] https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master
[2]
https://github.com/docker/distribution/blob/master/docs/storage-drivers/swift.md
[3] https://wiki.openstack.org/wiki/Keystone/Trusts
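(For context, this is roughly the shape of the credentials the swift storage
driver in [2] expects. A sketch using the registry's environment-variable
overrides; the exact option names should be checked against [2], and the
values here are placeholders:)

  export REGISTRY_STORAGE=swift
  export REGISTRY_STORAGE_SWIFT_AUTHURL=http://keystone:5000/v2.0
  export REGISTRY_STORAGE_SWIFT_USERNAME=<swift-user>
  export REGISTRY_STORAGE_SWIFT_PASSWORD=<swift-password>
  export REGISTRY_STORAGE_SWIFT_CONTAINER=docker-registry

The password line is the credential this thread is about sourcing securely.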

Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] [API]: API to get the list of router ports

2015-08-13 Thread Padmanabhan Krishnan
Hello,

Is there a Neutron public API to get the list of router ports? Something
similar to what the command neutron router-port-list {tenant} gives. I wasn't
able to find one in the Neutron API doc or in neutronclient/v2_0/client.py.

I think with a combination of subnet_show and port_list one can find the list
of neutron router ports, but I just wanted to see if there's an API already
available.
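(One possible workaround, sketched below: the generic port-list API accepts
filters, so querying ports by device_id and device_owner should return a
router's ports. The endpoint, token and router id are placeholders.)

  curl -s -H "X-Auth-Token: $TOKEN" \
    "http://neutron-server:9696/v2.0/ports?device_id=<router-id>&device_owner=network:router_interface"

The same filters can be passed to list_ports() in
neutronclient/v2_0/client.py.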
Thanks,
Paddu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Stable][Nova] VMware NSXv Support

2015-08-13 Thread Salvatore Orlando
On 13 August 2015 at 09:50, John Garbutt j...@johngarbutt.com wrote:

 On Wednesday, August 12, 2015, Thierry Carrez thie...@openstack.org
 wrote:

 Gary Kotton wrote:
 
  On 8/12/15, 12:12 AM, Mike Perez thin...@gmail.com wrote:
  On 15:39 Aug 11, Gary Kotton wrote:
  On 8/11/15, 6:09 PM, Jay Pipes jaypi...@gmail.com wrote:
 
  Are you saying that *new functionality* was added to the stable/kilo
  branch of *Neutron*, and because new functionality was added to
  stable/kilo's Neutron, that stable/kilo *Nova* will no longer work?
 
   Yes. That is exactly what I am saying. The issue is as follows. The
   NSXv
   manager requires the virtual machine's VNIC index to enable the
 security
  groups to work. Without that a VM will not be able to send and receive
  traffic. In addition to this the NSXv plugin does not have any agents
 so
  we need to do the metadata plugin changes to ensure metadata support.
 So
  effectively with the patches: https://review.openstack.org/209372 and
  https://review.openstack.org/209374 the stable/kilo nova code will
 not
  work with the stable/kilo neutron NSXv plugin.
  snip
 
  So what do you suggest?
 
  This was added in Neutron during Kilo [1].
 
  It's the responsibility of the patch owner to revert things if
 something
  doesn't land in a dependency patch of some other project.
 
  I'm not familiar with the patch, but you can see if Neutron folks will
  accept
  a revert in stable/kilo. There's no reason to get other projects
 involved
  because this wasn't handled properly.
 
  [1] - https://review.openstack.org/#/c/144278/
 
  So you are suggesting that we revert the neutron plugin? I do not think
  that a revert is relevant here.

 Yeah, I'm not sure reverting the Neutron patch would be more acceptable.
 That one landed in Neutron kilo in time.

 The issue here is that due to Nova's review velocity during the kilo
 cycle (and arguably the failure to raise this as a cross-project issue
 affecting the release), the VMware NSXv support was shipped as broken in
 Kilo, and requires non-trivial changes to get fixed.


 I see this as Nova not shipping with VMware NSXv support in kilo, the
 feature was never completed, rather than it being broken. I could be
 missing something, but I also know that difference doesn't really help
 anyone.


 We have two options: bending the stable rules to allow the fix to be
 backported, or document it as broken in Kilo with the invasive patches
 being made available for people and distributions who still want to
 apply it.

 Given that we are 4 months into Kilo, I'd say stable/kilo users are used
 to this being broken at this point, so my vote would go for the second
 option.


 This would be backporting a new driver to an older release. That seems
 like a bad idea.


 That said, we should definitely raise [1] as a cross-project issue and
 see how we could work it into Liberty, so that we don't end up in the
 same dark corner in 4 months. I just don't want to break the stable
 rules (and the user confidence we've built around us applying them) to
 retroactively pay back review velocity / trust issues within Nova.

 [1] https://review.openstack.org/#/c/165750/


 So this is the same issue. The VMware neutron driver has merged support
 for a feature where we have not managed to get into Nova yet.

 First the long term view...

 This is happening more frequently with Cinder drivers/features, Neutron
 things, and to a lesser extent Glance.

 The great work the Cinder folks have done with brick, is hopefully going
 to improve the situation for Cinder. There are a group of folks working on
 a similar VIF focused library to help making it easier to add support for
 new Neutron VIF drivers without needing to merge things in Nova.

 Right now those above efforts are largely focused on libvirt, but using
 oslo.vmware, or probably something else, I am sure we could evolve
 something similar for VMware, but I haven't dug into that.


That is definitely the way to go in my opinion. I reckon VIF plugging is an
area where there is a lot of coupling with Neutron, and decentralizing
will definitely be beneficial for both contributors and reviewers. It
should be ok to have a VMware-specific VIF library - it would not really
work like Cinder's brick, but from the nova perspective I think this does
not matter.



 There are lots of coding efforts and process efforts to make the most of
 our current review bandwidth and to expand that bandwidth, but I don't
 think it's helpful to get into that here.

 So, more short term and specific points...

 This patch had no bug or blueprint attached. It eventually got noticed a
 few weeks after the blueprint freeze. It's hard to track cross project
 dependencies if we don't know they exist. None of the various escalation
 paths raised this patch. None of those things are good, they happened,
 things happen.


The blueprint was indeed attached to the commit message only on the last
patchset. This has been handled poorly by the 

[openstack-dev] [Neutron][Kuryr] - Update Status

2015-08-13 Thread Gal Sagie
Hello everyone,

I would like to give a short status update on Kuryr [1].

The project is starting to formalize; we have already conducted two IRC
meetings [2] to define the project's first goals and roadmap. Check the
meeting logs and the agenda here [3].

I think we see a good amount of interest from the community in the project
and an understanding of the importance of its goals.
The project repository already contains the proxy implementation of the
libnetwork remote driver API, which is mapped to Neutron's APIs; you can
check and review the code [4].

The current topics we are discussing are: (please view the etherpads for
more information)

1) Kuryr Configuration - both the Neutron side and Docker side [5]

2) Generic VIF-Binding solution that can be used by all Neutron plugins [6]

We are trying to cooperate with and leverage the tremendous work already
done in the Magnum and Kolla projects, and to see where Kuryr and Neutron
fit together with these projects.
We have Daneyon Hansen joining our meetings, and we hope to keep up the
cooperation and introduce a solution which leverages the experience and
work done in Neutron and its implementations.

I want to welcome anyone who is interested in this topic to come to the
meetings, raise ideas/comments in the etherpads, review the code and
contribute code; we welcome any contribution.

I would like to thank Antoni Segura Puimedo (apuimedo) for leading this
effort, and everyone who is contributing to the project.

[1] https://launchpad.net/kuryr
[2] http://eavesdrop.openstack.org/#Kuryr_Project_Meeting
[3] https://wiki.openstack.org/wiki/Meetings/Kuryr
[4] https://review.openstack.org/#/q/project:openstack/kuryr,n,z
[5] https://etherpad.openstack.org/p/kuryr-configuration
[6] https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest] Can we add testcase for download image v2?

2015-08-13 Thread Ken'ichi Ohmichi

Hi Deore,

2015-08-13 15:33 GMT+09:00 Deore, Pranali11 pranali11.de...@nttdata.com:
 Hi,



 While going through the tempest code, I have found that image-download v2
 api test is missing in tempest.

 Can I add the api test for the same? Please suggest?

I guess

https://github.com/openstack/tempest/blob/master/tempest/api/image/v2/test_images.py#L70

is testing this API.
Or does image-download v2 API mean a different API?
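(For reference, the raw call being exercised there is the image data
download endpoint of the Images v2 API; a sketch with placeholder endpoint,
token and image id:)

  curl -s -H "X-Auth-Token: $TOKEN" \
    -o downloaded.img \
    "http://glance-server:9292/v2/images/<image-id>/file"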

 Also glance task import API related testcases are also not there in tempest.

Yeah, you seem right.
Current tempest doesn't contain test cases for glance's tasks APIs:

http://developer.openstack.org/api-ref-image-v2.html#os-tasks-v2

 Is it ok if I add tests for the same?

Yes, please :)

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-08-13 Thread Daniel P. Berrange
On Wed, Aug 12, 2015 at 07:20:24PM +0200, Markus Zoeller wrote:
 Another thing which makes it hard to understand the impact of the config
 options is that it's not clear what the interdependencies with other config
 options are. As an example, serial_console.base_url has a
 dependency on DEFAULT.cert and DEFAULT.key if you want to use
 secured websockets (base_url=wss://...). Another one is the option
 serial_console.serialproxy_port. This port number must be the same
 as it is in serial_console.base_url. I couldn't find an explanation of
 this.
 
 The three questions I have with every config option:
 1) which service(s) access this option?
 2) what does it do? / what's the impact? 
 3) which other options do I need to tweak to get the described impact?
 
 Would it make sense to stage the changes?
 M cycle: move the config options out of the modules to another place
  (like the approach Sean proposed) and annotate them with
  the services which uses them
 N cycle: inject the options into the drivers and eliminate the global
  variables this way (like Daniel et al. proposed)

The problem I see is that as long as we're using config options as
global variables, figuring out which services use which options is
a major non-trivial effort. Some may be easy to figure out, but
with many it gets into quite involved call path analysis, and the usage is
changing under your feet as new reviews are posted. So personally
I think it would be more practical to do the reverse: i.e. stop using
the config options as global variables, and then split up the
config file so that we have a separate one for each service,

i.e. an /etc/nova/nova-compute.conf, and get rid of /etc/nova/nova.conf.
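(Operationally that would look something like the sketch below, assuming
each service keeps the standard oslo.config --config-file flag; the file
names are just illustrative:)

  nova-api       --config-file /etc/nova/nova-api.conf
  nova-compute   --config-file /etc/nova/nova-compute.conf
  nova-scheduler --config-file /etc/nova/nova-scheduler.conf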

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Liberty SPFE Request - IDP Specific WebSSO

2015-08-13 Thread David Chadwick
I would also like a spec proposal freeze exception, but not if this
leads to a rushed design and a poor implementation that will need to be
fixed again during the next cycle. It's far better to get the right
design now, even if it means missing the liberty release, than to
implement a suboptimal design just in order to make the liberty release.
We have too many examples of half-implemented federation features being
rushed into a release, which then require more effort to fix in
the next release (and cause churn for implementors).

David

On 13/08/2015 00:20, Lance Bragstad wrote:
 Hey all, 
 
 
 I'd like to propose a spec proposal freeze exception for IDP Specific
 WebSSO [0].
 
 This topic has been discussed, in length, on the mailing list [1], where
 this spec has been referenced as a possible solution [2]. This would
 allow for multiple Identity Providers to use the same protocol. As
 described on the mailing list, this proposal would help with the public
 cloud cases for federated authentication workflows, where Identity
 Providers can't be directly exposed to users. 
 
 The flow would look similar to what we already do for federated
 authentication [3], but it includes adding a call in step 3. Most of the
 code for step 3 already exists in Keystone, it would more or less be
 adding it to the path.
 
 
 Thanks!
 
 
 [0] https://review.openstack.org/#/c/199339/2
 [1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071131.html
 [2] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071571.html
 [3] http://goo.gl/lLbvE1
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-13 Thread David Chadwick


On 13/08/2015 02:22, Jamie Lennox wrote:
 
 
 - Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, 13 August, 2015 7:46:54 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 Hi Jamie
 
 I have been thinking some more about your Coke and Pepsi use case 
 example, and I think it is a somewhat spurious example, for the 
 following reasons:
 
 1. If Coke and Pepsi are members of the SAME federation, then they
 trust each other (by definition). Therefore they would not and
 could not object to being listed as alternative IdPs in this
 federation.
 
 2. If Coke and Pepsi are in different federations because they
 don't trust each other, but they have the same service provider,
 then their service provider would be a member of both federations.
 In this case, the SP would provide different access points to the
 different federations, and neither Coke nor Pepsi would be aware of
 each other.
 
 regards
 
 David
 
 So yes, my point here is about number 2 and providing multitenancy in a
 way that you can't see who else is available, and in talking with
 some of the keystone people this is essentially what we've come to
 (and I think I mentioned earlier): that you would need to provide a
 different access point to different companies

not to the different companies, but to the different federations. This
is fundamentally different.
However, an SP within a federation may have a private contract with one
organisation and provide a separate access point to it (which may have
several IdPs associated with it).
So I think that Keystone needs a way of indicating which groups of IdPs
have similar relationships to the SP and need to be grouped together for
display purposes.
This brings me back to another related email I sent out. OpenStack needs
a general way of applying access controls to lists (tables) of entities.
This would solve the current and other similar problems in a common way.

 to keep this
 information private. It has the side advantage for the public cloud
 folks of providing whitelabelling for horizon.
 
 The question then, once you have multiple access points per customer
 (not user), is how to list IdPs that are associated with that
 customer. The example I had earlier was tagging: you could tag a
 horizon instance (probably doesn't need to be a whole instance, just
 a login page) with a value like COKE, and when you list IdPs from
 keystone you say list with tag=COKE to find out what should show in
 horizon. This would allow common IdPs like google to be reused.

I think it is an authorisation issue, and tagging is no different to
applying roles (except it's less secure). If you have the right role, you
get access to the list entry, otherwise you do not. This is secure.
Tagging is not. It effectively allows anyone to claim any role they wish
by saying I want tag Z.

 
 This is why I was saying that public/private may not be fine-grained
 enough.

Agreed. It is effectively a single-role-based system.

 It may also not be a realistic concern. If we are talking about
 a portal per customer, does the cost of rebooting horizon to statically
 add a new IdP to the local_config matter? This is presumably a rare
 operation.
 
 I think the answer has been for a while that idp listing is going to
 need to be configurable from horizon because we already have a case
 for list nothing, list everything, and use this static list, so if in
 future we find we need to add something more complex like tagging
 it's another option we can consider then.

I don't think this is the correct approach. It is allowing the user (in
this case Horizon) to apply his own access controls.

regards

David
 
 
 On 06/08/2015 00:54, Jamie Lennox wrote:
 
 
 - Original Message -
 From: David Lyle dkly...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Thursday, August 6, 2015 5:52:40 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 Forcing Horizon to duplicate Keystone settings just makes
 everything much harder to configure and much more fragile.
 Exposing whitelisted, or all, IdPs makes much more sense.
 
 On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews 
 dolph.math...@gmail.com  wrote:
 
 
 
 On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli 
 steve...@ca.ibm.com  wrote:
 
 
 
 
 
 Some folks said that they'd prefer not to list all associated
 IdPs, which I can understand. Why?
 
 So the case I heard, and I think is fairly reasonable, is providing
 corporate logins to a public cloud. Taking the canonical
 coke/pepsi example: if I'm Coke, I get asked to log in to this
 public cloud, I then have to scroll through all the providers to
 find the COKE.COM domain, and I can see, for example, that PEPSI.COM
 is also providing logins to this cloud. Ignoring the corporate
 privacy implications, this list has the potential to get long.
 Think about for example how you can 

Re: [openstack-dev] [magnum]problems for horizontal scale

2015-08-13 Thread Kai Qiang Wu
Hi Hua,

My comments are inline below; please check.

Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   王华 wanghua.hum...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   08/13/2015 03:32 PM
Subject:Re: [openstack-dev] [magnum]problems for horizontal scale



Hi Kai Qiang Wu,

I have some comments in line.

On Thu, Aug 13, 2015 at 1:32 PM, Kai Qiang Wu wk...@cn.ibm.com wrote:
  Hi Hua,

  I have some comments about this:

  A
   remove heat poller can be a way, but some of its logic needs to make
  sure it work and performance not burden.
  1) for old heat poller it is quick loop, with fixed interval, to make
  sure stack status update quickly can be reflected in bay status
  2) for periodic task running, it seems dynamic loop, and period is long,
  it was added for some stacks creation timeout, 1) loop exit, this 2) loop
  can help update the stack and also conductor crash issue


 It is not necessary to remove heat poller, so we can keep it.



  It would be ideal to put in one place for looping over the stacks, but
  periodic tasks need to consider if it really just need to loop
  IN_PROGRESS status stack ? And what's the interval for loop that ? (60s
  or short, loop performance)


It is necessary to loop IN_PROGRESS status stack for conductor crash issue.


  Does heat have other status transition  path, like delete_failed --
  (status reset) -- become OK.  etc.


 It needs to be made sure.




  B For remove db operation in bay_update case. I did not understand your
  suggestion.
  bay_update include update_stack and poll_and_check(it is in heat poller),
  if you removed heat poller to periodic task(as you said in your 3). It
  still needs db operations.



Race conditions occur in periodic tasks too.  If we save the stack params
such as node_count in bay_update and race condition occurs, then the
node_count in db is wrong and the status is UPDATE_COMPLETE. And there is
no way to correct it.
If we save stack params in periodic tasks and race condition occurs, the
node_count in db is still wrong and status is UPDATE_COMPLETE. We can
correct it in the next periodic task if race condition does not occur. The
solution I proposed can not promise the data in db is always right.

Yes, it can help some, when you talked periodic task,  I checked that,

  filters = [bay_status.CREATE_IN_PROGRESS,
   bay_status.UPDATE_IN_PROGRESS,
   bay_status.DELETE_IN_PROGRESS]
bays = objects.Bay.list_all(ctx, filters=filters)
   If UPDATE_COMPLETE, I did not find it would sync it in this task. Do you
mean add that status check in this periodic task ?



C For allow admin user to show stacks in other tenant, it seems OK. Does
other projects try this before? Is it reasonable case for customer ?

Nova allow admin user to show instances in other tenant. Neutron allow
admin user to show ports in other tenant, nova uses it to sync up network
info for instance from neutron.
   That would be OK, I think

Thanks


Best Wishes,


Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
        No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193


Follow your heart. You are miracle!

Inactive hide details for 王华 ---08/13/2015 11:31:53 AM---any comments on
this? On Wed, Aug 12, 2015 at 2:50 PM, 王华 wan王华 ---08/13/2015 11:31:53
AM---any comments on this? On Wed, Aug 12, 2015 at 2:50 PM, 王华 
wanghua.hum...@gmail.com wrote:

From: 王华 wanghua.hum...@gmail.com
To: openstack-dev@lists.openstack.org
Date: 08/13/2015 11:31 AM
Subject: Re: [openstack-dev] [magnum]problems for horizontal scale




  any comments on this?

  On Wed, Aug 12, 2015 at 2:50 PM, 王华 wanghua.hum...@gmail.com wrote:
Hi All,

In order to prevent race conditions due to multiple conductors, my
solution is as blew:
1. remove the db operation in bay_update to prevent race
conditions.Stack operation is atomic. Db operation is atomic. But
the two operations together are not atomic.So the data in the db
may be wrong.
2. sync up stack status and stack parameters(now only node_count)
from heat by periodic tasks. 

Re: [openstack-dev] [Magnum] Obtain the objects from the bay endpoint

2015-08-13 Thread Kai Qiang Wu
Hi Stdake and Vilobh,

If I get what you proposed below, you mean pod/rc/service would not be
stored on the magnum side, just retrieved and updated on the k8s side?

For now, if magnum does not add any specific logic to pod/rc/service, that
can be OK.

Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steven Dake (stdake) std...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   08/12/2015 11:52 PM
Subject:Re: [openstack-dev] [Magnum] Obtain the objects from the bay
endpoint





From: Akash Gangil akashg1...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, August 12, 2015 at 1:37 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Obtain the objects from the bay
endpoint

  Hi,

  I have a few questions. inline.


Problem :-

Currently objects (pod/rc/service) are read from the database. In
order for native clients to work, they must be read from the ReST
bay endpoint. To execute native clients, we must have one truth of
the state of the system, not two as in its current state of art.


  What is meant by the native clients here? Can you give an example?

Native client is docker binary or kubectl from those various projects.  We
also need to support python-magnumclient operations to support further Heat
integration, which allows Magnum to be used well with proprietary software
implementations that may be doing orchestration via Heat.



A]  READ path needs to be changed :

1. For python clients :-


python-magnum client -> rest api -> conductor -> rest endpoint -> k8s-api
handler


In its present state of the art this is: python-magnum client -> rest api -> db


2. For native clients :-

native client -> rest endpoint -> k8s-api



  If the native client can get all the info through the rest endpoint -> k8s
  handler, why in the case of the magnum client do we need to go through
  rest api -> conductor? Do we parse or modify the k8s-api data before
  responding to the python-magnum client?



Kubernetes has a rest API endpoint running in the bay.  This is different
from the Magnum rest API.  This is what is referred to above.

B] WRITE operations need to happen via the rest endpoint instead of
the conductor.

  If we completely bypass the conductor, is there any way to keep a
  track of trace of how a resource was modified? Since I presume now
  magnum doesn't have that info, since we talk to k8s-api directly? Or
  is this irrelevant?
C] Another requirement that needs to be satisfied is that data
returned by magnum should be the same whether its created by native
client or python-magnum client.

  I don't understand why is the information duplicated in the magnum db
  and k8s data source in first place? From what I understand magnum has
  its own database which is with k8s-api responses?

The reason it is duplicated is because when I wrote the original code, I
didn’t foresee this objective.  Essentially I’m not perfect ;)

The fix will make sure all of the above conditions are met.


Need your input on the proposed approach.



ACK accurate of my understanding of the proposed approach :)
-Vilobh


[1] https://blueprints.launchpad.net/magnum/+spec/objects-from-bay

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
  Akash
  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum]problems for horizontal scale

2015-08-13 Thread 王华
Hi Kai Qiang Wu,

I have some comments in line.

On Thu, Aug 13, 2015 at 1:32 PM, Kai Qiang Wu wk...@cn.ibm.com wrote:

 Hi Hua,

 I have some comments about this:

 A
  remove heat poller can be a way, but some of its logic needs to make sure
 it work and performance not burden.
 1) for old heat poller it is quick loop, with fixed interval, to make sure
 stack status update quickly can be reflected in bay status
 2) for periodic task running, it seems dynamic loop, and period is long,
 it was added for some stacks creation timeout, 1) loop exit, this 2) loop
 can help update the stack and also conductor crash issue

 It is not necessary to remove heat poller, so we can keep it.




 It would be ideal to put in one place for looping over the stacks, but
 periodic tasks need to consider if it really just need to loop
 IN_PROGRESS status stack ? And what's the interval for loop that ? (60s or
 short, loop performance)

It is necessary to loop IN_PROGRESS status stack for conductor crash issue.



 Does heat have other status transition  path, like delete_failed --
 (status reset) -- become OK.  etc.

 It needs to be made sure.





 B For remove db operation in bay_update case. I did not understand your
 suggestion.
 bay_update include update_stack and poll_and_check(it is in heat poller),
 if you removed heat poller to periodic task(as you said in your 3). It
 still needs db operations.

 Race conditions occur in periodic tasks too.  If we save the stack params
such as node_count in bay_update and race condition occurs, then the
node_count in db is wrong and the status is UPDATE_COMPLETE. And there is
no way to correct it.
If we save stack params in periodic tasks and race condition occurs, the
node_count in db is still wrong and status is UPDATE_COMPLETE. We can
correct it in the next periodic task if race condition does not occur. The
solution I proposed can not promise the data in db is always right.



 C For allow admin user to show stacks in other tenant, it seems OK. Does
 other projects try this before? Is it reasonable case for customer ?

 Nova allow admin user to show instances in other tenant. Neutron allow
 admin user to show ports in other tenant, nova uses it to sync up network
 info for instance from neutron.

 Thanks


 Best Wishes,

 
 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing

 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
 100193

 
 Follow your heart. You are miracle!

 [image: Inactive hide details for 王华 ---08/13/2015 11:31:53 AM---any
 comments on this? On Wed, Aug 12, 2015 at 2:50 PM, 王华 wan]王华
 ---08/13/2015 11:31:53 AM---any comments on this? On Wed, Aug 12, 2015 at
 2:50 PM, 王华 wanghua.hum...@gmail.com wrote:

 From: 王华 wanghua.hum...@gmail.com
 To: openstack-dev@lists.openstack.org
 Date: 08/13/2015 11:31 AM
 Subject: Re: [openstack-dev] [magnum]problems for horizontal scale
 --



 any comments on this?

 On Wed, Aug 12, 2015 at 2:50 PM, 王华 *wanghua.hum...@gmail.com*
 wanghua.hum...@gmail.com wrote:

Hi All,

In order to prevent race conditions due to multiple conductors, my
solution is as blew:
1. remove the db operation in bay_update to prevent race
conditions.Stack operation is atomic. Db operation is atomic. But the two
operations together are not atomic.So the data in the db may be wrong.
2. sync up stack status and stack parameters(now only node_count) from
heat by periodic tasks. bay_update can change stack parameters, so we need
to sync up them.
3. remove heat poller, because we have periodic tasks.

To sync up stack parameters from heat, we need to show stacks using
admin_context. But heat don't allow to show stacks in other tenant. If we
want to show stacks in other tenant, we need to store auth context for
every bay. That is a problem. Even if we store the auth context, there is a
timeout for token. The best way I think is to let heat allow admin user to
show stacks in other tenant.

Do you have a better solution or any improvement for my solution?

Regards,
Wanghua

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [magnum]problems for horizontal scale

2015-08-13 Thread 王华
Hi Kai,
In my solution, the stack needs to be synced in periodic tasks even if the
bay status is UPDATE_COMPLETE.

Thanks

Regards,
Wanghua

On Thu, Aug 13, 2015 at 3:42 PM, Kai Qiang Wu wk...@cn.ibm.com wrote:

 hi Hua,

 My comments in blue below. please check.

 Thanks


 Best Wishes,

 
 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing

 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
 100193

 
 Follow your heart. You are miracle!

 [image: Inactive hide details for 王华 ---08/13/2015 03:32:53 PM---Hi Kai
 Qiang Wu, I have some comments in line.]王华 ---08/13/2015 03:32:53 PM---Hi
 Kai Qiang Wu, I have some comments in line.

 From: 王华 wanghua.hum...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 08/13/2015 03:32 PM
 Subject: Re: [openstack-dev] [magnum]problems for horizontal scale
 --



 Hi Kai Qiang Wu,

 I have some comments in line.

 On Thu, Aug 13, 2015 at 1:32 PM, Kai Qiang Wu wk...@cn.ibm.com wrote:

Hi Hua,

I have some comments about this:

 A. Removing the heat poller can be a way, but its logic needs to be
 preserved so that it still works and does not become a performance burden.
 1) The old heat poller is a quick loop with a fixed interval, to make
 sure stack status updates are quickly reflected in the bay status.
 2) The periodic task is a dynamic loop with a long period. It was added for
 stacks whose creation times out and loop 1) exits; loop 2) can then still
 update the stack, and it also covers the conductor crash issue.


  It is not necessary to remove heat poller, so we can keep it.




 It would be ideal to have one place that loops over the stacks, but the
 periodic task needs to consider whether it really only needs to loop over
 IN_PROGRESS stacks, and what the loop interval should be
 (60s or shorter, for loop performance).


 It is necessary to loop over IN_PROGRESS stacks to handle the conductor crash issue.



 Does heat have other status transition paths, like delete_failed --
 (status reset) -- becoming OK again, etc.?


  That needs to be confirmed.





 B. For removing the db operation in the bay_update case, I did not understand
 your suggestion. bay_update includes update_stack and poll_and_check (the
 latter is in the heat poller); if you move the heat poller into the periodic
 task (as you said in your point 3), it still needs db operations.


 Race conditions occur in periodic tasks too. If we save the stack params
 such as node_count in bay_update and a race condition occurs, then the
 node_count in the db is wrong while the status is UPDATE_COMPLETE, and there
 is no way to correct it.
 If we save stack params in periodic tasks and a race condition occurs, the
 node_count in the db is still wrong and the status is UPDATE_COMPLETE, but we
 can correct it in the next periodic task if the race condition does not recur.
 The solution I proposed cannot guarantee that the data in the db is always right.


   Yes, it can help some. When you talked about the periodic task, I checked
   that it uses:

       filters = [bay_status.CREATE_IN_PROGRESS,
                  bay_status.UPDATE_IN_PROGRESS,
                  bay_status.DELETE_IN_PROGRESS]
       bays = objects.Bay.list_all(ctx, filters=filters)

   If a bay is UPDATE_COMPLETE, I did not find that this task would sync it.
   Do you mean adding that status check to this periodic task?
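
   A minimal sketch of that check (hypothetical; syncing UPDATE_COMPLETE bays
   is the proposal under discussion, not existing Magnum code):

       # Also pick up UPDATE_COMPLETE bays so their parameters (e.g.
       # node_count) can be re-synced from Heat by the same periodic task.
       filters = [bay_status.CREATE_IN_PROGRESS,
                  bay_status.UPDATE_IN_PROGRESS,
                  bay_status.DELETE_IN_PROGRESS,
                  bay_status.UPDATE_COMPLETE]
       bays = objects.Bay.list_all(ctx, filters=filters)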



 C. Allowing the admin user to show stacks in other tenants seems OK. Have
 other projects tried this before? Is it a reasonable case for customers?

 Nova allows admin users to show instances in other tenants. Neutron allows
 admin users to show ports in other tenants; nova uses this to sync up network
 info for instances from neutron.
 That would be OK, I think.


 Thanks


 Best Wishes,

 
 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing

 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
 100193

 
 Follow your heart. You are miracle!

 [image: Inactive hide details for 王华 ---08/13/2015 11:31:53 AM---any
 comments on this? On Wed, Aug 12, 2015 at 2:50 PM, 王华 wan]王华
 ---08/13/2015 11:31:53 AM---any comments on this? On Wed, Aug 12, 2015 at
 2:50 PM, 王华 *wanghua.hum...@gmail.com* wanghua.hum...@gmail.com wrote:

 From: 王华 wanghua.hum...@gmail.com
 To: openstack-dev@lists.openstack.org
 

Re: [openstack-dev] [trove]Implement the API to createmasterinstance and slave instances with one request

2015-08-13 Thread 陈迪豪
We have read the code of replica_count and it's like what I thought.


We have a suggestion to extend this feature. When users set slave_of_id and 
replica_count at the same time, we just create replica instances. If they use 
replica_count without using slave_of_id, we should create a master 
instance for them and some replica instances of it.


For example, trove create $name --replica-of $id --replica_count=2 will 
create 2 replica instances. And trove create $name --replica_count=2 will 
create 1 master instance and 2 replica instances.
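

To make the proposed semantics concrete, here is a rough sketch (a hypothetical
helper, not Trove's actual code) of what a single create request would plan:

    # Sketch of the proposed semantics: --replica-of plus --replica_count
    # creates only replicas; --replica_count alone creates one master plus
    # its replicas.
    def plan_instances(name, replica_of=None, replica_count=0):
        # Return the list of instances one API call would create.
        if replica_of:
            return [('replica', replica_of)] * replica_count
        plan = [('master', name)]
        plan += [('replica', name)] * replica_count
        return plan

    # "trove create mydb --replica_count=2" -> 1 master + 2 replicas
    print(plan_instances('mydb', replica_count=2))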


What do you think Doug?


Regards,
tobe from UnitedStack


-- Original --
From:  陈迪豪 chendi...@unitedstack.com;
Date:  Thu, Aug 13, 2015 12:25 PM
To:  openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [trove]Implement the API to createmasterinstance 
and slave instances with one request

 
Thanks Doug.
 
It's really helpful and we need this feature as well. Can you point out the bp 
or patch of this?


I think we will add a --replica-count parameter to the trove create request. So 
trove-api will create the trove instance (asynchronously creating the nova instance) 
and then create some replica trove instances (asynchronously creating their nova 
instances). This is really useful for web front-end developers who want to create 
master and replica instances at the same time (they don't want to send multiple 
requests themselves).


Regards,
tobe from UnitedStack 


-- Original --
From:  Doug Shelley d...@tesora.com;
Date:  Wed, Aug 12, 2015 10:21 PM
To:  openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [trove]Implement the API to create masterinstance 
and slave instances with one request

 
   As of Kilo, you can add a --replica-count parameter to trove create 
--replica-of to have it spin up multiple mysql slaves simultaneously. This same 
construct is in the python/REST API as well. I realize that you still need to 
create a master first, but thought I would point this out as it might be 
helpful to you.
 
 
 
 
 Regards,
 Doug
 
 
 
 
   From: 陈迪豪 chendi...@unitedstack.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Tuesday, August 11, 2015 at 11:45 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [trove]Implement the API to create master instance 
and slave instances with one request
 
 
 
   Now we can create mysql master instance and slave instance one by one.
 
 
 It would be much better to allow user to create one master instance and 
multiple slave instances with one request.
 
 
 Any suggestion about this, the API design or the implementation?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Consistent functional test failures (seems infra not have enough resource)

2015-08-13 Thread Kai Qiang Wu
Hi Tom,


I did talk to infra; I think it is a resource issue, but they thought
it is a nova issue.

When we boot a k8s bay, we use a baymodel with flavor m1.small; you can find
the flavors in devstack:



+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID  | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1   | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2   | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3   | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4   | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 42  | m1.nano   | 64        | 0    | 0         |      | 1     | 1.0         | True      |
| 451 | m1.heat   | 512       | 0    | 0         |      | 1     | 1.0         | True      |
| 5   | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 84  | m1.micro  | 128       | 0    | 0         |      | 1     | 1.0         | True      |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+



From logs below:

[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin]
(devstack-trusty-rax-dfw-4299602, devstack-trusty-rax-dfw-4299602)
ram:5172 disk:17408 io_ops:0 instances:1 does not have 20480 MB usable
disk, it only has 17408.0 MB usable disk. host_passes
/opt/stack/new/nova/nova/scheduler/filters/disk_filter.py:60
2015-08-13 08:26:15.218 INFO nova.filters
[req-e

The flavor requires 20GB of disk space, so the scheduling failed on that.
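
A quick back-of-the-envelope check of the DiskFilter line above (values taken
from the flavor table and the scheduler log; the variable names are mine):

    requested_disk_mb = 20 * 1024          # m1.small root disk: 20 GB
    usable_disk_mb = 17408.0               # reported by the scheduler log
    print(usable_disk_mb >= requested_disk_mb)  # False -> "No valid host"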


I think it is related to this: the disk space on the Jenkins-allocated VMs is
not large. I am curious why it has failed so often recently. Did os-infra
change something?




Thanks




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Tom Cammann tom.camm...@hp.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   08/13/2015 06:24 PM
Subject:[openstack-dev] [Magnum] Consistent functional test failures



Hi Team,

Wanted to let you know why we are having consistent functional test
failures in the gate.

This is being caused by Nova returning No valid host to heat:

2015-08-13 08:26:16.303 31543 INFO heat.engine.resource [-] CREATE:
Server kube_minion [12ab45ef-0177-4118-9ba0-3fffbc3c1d1a] Stack
testbay-y366b2atg6mm-kube_minions-cdlfyvhaximr-0-dufsjliqfoet
[b40f0c9f-cb54-4d75-86c3-8a9f347a27a6]
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource Traceback (most
recent call last):
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/resource.py, line 625, in
_action_recorder
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/resource.py, line 696, in _do_action
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield
self.action_handler_task(action, args=handler_args)
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/scheduler.py, line 320, in wrapper
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource step =
next(subtask)
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/resource.py, line 670, in
action_handler_task
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource while not
check(handler_data):
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/resources/openstack/nova/server.py,
line 759, in check_create_complete
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource return
self.client_plugin()._check_active(server_id)
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/clients/os/nova.py, line 232, in
_check_active
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource 'code':
fault.get('code', _('Unknown'))
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource
ResourceInError: Went to status ERROR due to Message: No valid host was
found. There are not enough hosts available., Code: 500

And this in turn is being caused by the compute instance running out of
disk space:

2015-08-13 08:26:15.216 DEBUG nova.filters
[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Starting with 1
host(s) get_filtered_objects /opt/stack/new/nova/nova/filters.py:70
2015-08-13 08:26:15.217 DEBUG nova.filters

[openstack-dev] [Magnum] Consistent functional test failures

2015-08-13 Thread Tom Cammann

Hi Team,

Wanted to let you know why we are having consistent functional test 
failures in the gate.


This is being caused by Nova returning No valid host to heat:

2015-08-13 08:26:16.303 31543 INFO heat.engine.resource [-] CREATE: 
Server kube_minion [12ab45ef-0177-4118-9ba0-3fffbc3c1d1a] Stack 
testbay-y366b2atg6mm-kube_minions-cdlfyvhaximr-0-dufsjliqfoet 
[b40f0c9f-cb54-4d75-86c3-8a9f347a27a6]
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource Traceback (most 
recent call last):
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
/opt/stack/new/heat/heat/engine/resource.py, line 625, in _action_recorder

2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
/opt/stack/new/heat/heat/engine/resource.py, line 696, in _do_action
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield 
self.action_handler_task(action, args=handler_args)
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
/opt/stack/new/heat/heat/engine/scheduler.py, line 320, in wrapper
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource step = 
next(subtask)
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
/opt/stack/new/heat/heat/engine/resource.py, line 670, in 
action_handler_task
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource while not 
check(handler_data):
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
/opt/stack/new/heat/heat/engine/resources/openstack/nova/server.py, 
line 759, in check_create_complete
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource return 
self.client_plugin()._check_active(server_id)
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
/opt/stack/new/heat/heat/engine/clients/os/nova.py, line 232, in 
_check_active
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource 'code': 
fault.get('code', _('Unknown'))
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource 
ResourceInError: Went to status ERROR due to Message: No valid host was 
found. There are not enough hosts available., Code: 500


And this in turn is being caused by the compute instance running out of 
disk space:


2015-08-13 08:26:15.216 DEBUG nova.filters 
[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Starting with 1 
host(s) get_filtered_objects /opt/stack/new/nova/nova/filters.py:70
2015-08-13 08:26:15.217 DEBUG nova.filters 
[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter 
RetryFilter returned 1 host(s) get_filtered_objects 
/opt/stack/new/nova/nova/filters.py:84
2015-08-13 08:26:15.217 DEBUG nova.filters 
[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter 
AvailabilityZoneFilter returned 1 host(s) get_filtered_objects 
/opt/stack/new/nova/nova/filters.py:84
2015-08-13 08:26:15.217 DEBUG nova.filters 
[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter RamFilter 
returned 1 host(s) get_filtered_objects 
/opt/stack/new/nova/nova/filters.py:84
2015-08-13 08:26:15.218 DEBUG nova.scheduler.filters.disk_filter 
[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] 
(devstack-trusty-rax-dfw-4299602, devstack-trusty-rax-dfw-4299602) 
ram:5172 disk:17408 io_ops:0 instances:1 does not have 20480 MB usable 
disk, it only has 17408.0 MB usable disk. host_passes 
/opt/stack/new/nova/nova/scheduler/filters/disk_filter.py:60
2015-08-13 08:26:15.218 INFO nova.filters 
[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter DiskFilter 
returned 0 hosts


For now a recheck seems to work about 1 in 2, so we can still land patches.

The fix for this could be to clean up our Magnum devstack install more 
aggressively, which might be as simple as cleaning up the images we use, 
or get infra to provide our tests with a larger disk size. I will 
probably test out a patch today which cleans up the images we use in 
devstack to see if that helps.
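
A sketch of the image-cleanup idea (hypothetical helper; it assumes a
glanceclient-like client object and that leftover test images can be
identified by a name prefix, which is an assumption on my part):

    def cleanup_test_images(glance, prefix='fedora-21-atomic'):
        # Delete images left behind by previous functional test runs to free
        # disk space on the single-node devstack host.
        for image in glance.images.list():
            if image.name and image.name.startswith(prefix):
                glance.images.delete(image.id)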


If anyone can help progress this let me know.

Cheers,
Tom



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-13 Thread Kuvaja, Erno


 -Original Message-
 From: Mike Perez [mailto:thin...@gmail.com]
 Sent: Wednesday, August 12, 2015 4:45 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and
 glance
 
 On Wed, Aug 12, 2015 at 2:23 AM, Kuvaja, Erno kuv...@hp.com wrote:
  -Original Message-
  From: Mike Perez [mailto:thin...@gmail.com]
  Sent: 11 August 2015 19:04
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store
  and glance
 
  On 15:06 Aug 11, Kuvaja, Erno wrote:
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
 
  snip
 
Having the image cache local to the compute nodes themselves
gives the best performance overall, and with glance_store, means
that glance-api isn't needed at all, and Glance can become just a
metadata repository, which would be awesome, IMHO.
  
   You have any figures to back this up in scale? We've heard similar
   claims for quite a while and as soon as people starts to actually
   look into how the environments behaves, they quite quickly turn
   back. As you're not the first one, I'd like to make the same
   request as to everyone before, show your data to back this claim
   up! Until that it is just like you say it is, opinion.;)
 
  The claims I make with Cinder doing caching on its own versus just
  using Glance with rally with an 8G image:
 
  Creating/deleting 50 volumes w/ Cinder image cache: 324 seconds
  Creating/delete 50 volumes w/o Cinder image cache: 3952 seconds
 
  http://thing.ee/x/cache_results/
 
  Thanks to Patrick East for pulling these results together.
 
  Keep in mind, this is using a block storage backend that is
  completely separate from the OpenStack nodes. It's *not* using a
  local LVM all in one OpenStack contraption. This is important because
  even if you have Glance caching enabled, and there was no cache miss,
  you still have to dd the bits to the block device, which is still
  going over the network. Unless Glance is going to cache on the storage
 array itself, forget about it.
 
  Glance should be focusing on other issues, rather than trying to make
  copying image bits over the network and dd'ing to a block device faster.
 
  --
  Mike Perez
 
  Thanks Mike,
 
  So without cinder cache your times averaged roughly 150+second marks.
  The couple of first volumes with the cache took roughly 170+seconds.
  What the data does not tell, was cinder pulling the images directly
  from glance backend rather than through glance on either of these cases?
 
 Oh but I did, and that's the beauty of this, the files marked cinder-cache-
 x.html are avoiding Glance as soon as it can, using the Cinder generic image
  cache solution [1]. Please reread what I said: Glance is unable to do
  caching in a storage array, so we don't rely on Glance. It's too slow
  otherwise.
 
 Take this example with 50 volumes created from image with Cinder's image
 cache
 [2]:
 
 * Is using Glance cache (oh no cache miss)
 * Downloads the image from whatever glance store
 * dd's the bits to the exported block device.
 * the bits travel to the storage array that the block device was exported
 from.
 * [2nd-50th] request of that same image comes, Cinder instead just
 references
   a cinder:// endpoint which has the storage array do a copy on write. ZERO
   COPYING since we can clone the image. Just a reference pointer and done,
 move
   on.
 
  Somehow you need to seed those caches and that seeding
 time/mechanism
  is where the debate seems to be. Can you afford keeping every image in
  cache so that they are all local or if you need to pull the image to
  seed your cache how much you will benefit that your 100 cinder nodes
  are pulling it directly from backend X versus glance caching/sitting
  in between. How block storage backend handles that 100 concurrent
  reads by different client when you are seeding it between different
  arrays? The scale starts matter here because it makes a lot of
  difference on backend if it's couple of cinder or nova nodes
  requesting the image vs. 100s of them. Lots of backends tends to not
  like such loads or we outperform them due not having to fight for the
 bandwidth with other consumers of that backend.
 
  Are you seriously asking if a backend is going to withstand concurrent
  reads compared to Glance cache?
 
 All storage backends do is I/O, unlike Glance which is trying to do a million
 things and just pissing off the community.

Thanks, I'm so happy to hear that it's not just a couple of us who think that 
the project is lacking focus.
 
 They do it pretty darn well and are a lot more sophisticated than Glance
 cache.
 I'd pick Ceph w/ Cinder generic image cache doing copy on writes over
 Glance cache any day.
 
 As it stands Cinder will be recommending in documentation for users to use
 the generic image cache solution over Glance Cache.
 


Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-08-13 Thread Sean Dague
On 08/13/2015 05:02 AM, Daniel P. Berrange wrote:
 On Wed, Aug 12, 2015 at 07:20:24PM +0200, Markus Zoeller wrote:
 Another thing which makes it hard to understand the impact of the config
 options is that it's not clear what the interdependencies with other config 
 options are. As an example, serial_console.base_url has a 
 dependency on DEFAULT.cert and DEFAULT.key if you want to use 
 secured websockets (base_url=wss://...). Another one is the option
 serial_console.serialproxy_port: this port number must be the same
 as the one in serial_console.base_url. I couldn't find an explanation of this.

 The three questions I have with every config option:
 1) which service(s) access this option?
 2) what does it do? / what's the impact? 
 3) which other options do I need to tweak to get the described impact?

 Would it make sense to stage the changes?
 M cycle: move the config options out of the modules to another place
  (like the approach Sean proposed) and annotate them with
  the services which use them
 N cycle: inject the options into the drivers and eliminate the global
  variables this way (like Daniel et al. proposed)
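
 For illustration, the interdependency described above looks roughly like
 this when the options are registered with oslo.config (a sketch; the help
 text and defaults below are assumptions, not Nova's actual definitions):

     from oslo_config import cfg

     serial_opts = [
         cfg.StrOpt('base_url', default='ws://127.0.0.1:6083/',
                    help='URL handed out to clients; a wss:// URL only works '
                         'if DEFAULT.cert and DEFAULT.key are also set.'),
         cfg.IntOpt('serialproxy_port', default=6083,
                    help='Must match the port embedded in base_url.'),
     ]

     CONF = cfg.CONF
     CONF.register_opts(serial_opts, group='serial_console')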
 
 The problem I see is that as long as we're using config options as
 global variables, figuring out which services use which options is
 a major non-trivial effort. Some may be easy to figure out, but
 with many it turns into quite involved call-path analysis, and the usage is
 changing under your feet as new reviews are posted. So personally
 I think it would be more practical to do the reverse, i.e. stop using
 the config options as global variables, and then split up the
 config file so that we have a separate one for each service.
 
 i.e. a /etc/nova/nova-compute.conf and get rid of /etc/nova/nova.conf

Options shouldn't be popping back and forth between services that often.
If they are, we're doing something else wrong. I do agree that it's a
big effort to start working through this. But we have some volunteers
and will get on it. And in collapsing these options into a smaller number of
places we're going to be touching most of them and getting to ask real
questions like "why is this even a thing?"

Because, right now, I don't think anyone has a good handle on our
configuration space. Providing that global view through such a
reorganization will help us figure out next steps here.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Adding screenshot when making UI changes

2015-08-13 Thread ELISHA, Moshe (Moshe)
Hey,

Following our discussion on IRC, when pushing UI changes please upload a 
screenshot of the result and add a link in the commit message.
This will allow us to better understand the change and will also allow non-UI 
developers to comment on the change.
Having the screenshot link in the commit message will allow the developer to 
update the screenshot if there are visible changes as a result of the reviews.

If the UI change is very minor or it is only infra and there are no visible 
changes - use Screenshot: N/A.

You can see an example at the recent Task details overview screen push [1]

[1] https://review.openstack.org/#/c/212489/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos] request to merge feature/qos back into master

2015-08-13 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 08/13/2015 06:54 AM, Andreas Jaeger wrote:
 On 08/12/2015 09:55 PM, Ihar Hrachyshka wrote:
 Hi all,
 
 with great pleasure, I want to request a coordinated review for 
 merging feature/qos branch back to master:
 
 https://review.openstack.org/#/c/212170/
 
 Great!
 
 Please send also a patch for project-config to remove the special 
 handling of that branch...

Right you are Andreas!

I have some infra/qa patches to enable QoS in gate [1].

Specifically, the order (partially controlled by Depends-On) should be
similar to:

- - merge feature/qos to master: https://review.openstack.org/212170
- - kill project-config hacks: https://review.openstack.org/212475
- - add q-qos support to devstack: https://review.openstack.org/212453
- - enable q-qos in neutron gate: https://review.openstack.org/212464
- - re-enable API tests: https://review.openstack.org/212466

[1]:
https://review.openstack.org/#/q/status:open+topic:bp/quantum-qos-api+ow
ner:%22Ihar+Hrachyshka+%253Cihrachys%2540redhat.com%253E%22,n,z

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVzHryAAoJEC5aWaUY1u57EkoIAIkd7gW0NGfdANkjqlWbeyCG
1PeMr69NsicNqkdzj5lVsXfDf6PxEeq+2wkd2WdfYcflRvSE1gc3RqkQOLZEEEKs
W9Xt5e9IL8s3+Zo6O96hNBKvytEpcvP+CodyqB+DNInhp1gcjLltm1xwSiWsuAn4
um5t0XLb39CG6du/pSReSPbjqgNBM94DfD88NhQ6asJSiQtEgOtz3HD4hzLlAS5A
8WhnlPPCg9bDHGCG/vEmNoEyLUUGSmui3Xy/jWtunH+atRBC/xCvltFPVEWWLtu8
OsiSWDTmt48nDIJomIp1ZBtYXwjvokCbdI3aPJf3E7d9z2X8kGd92gOp+Pg6F6A=
=TXlp
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] change of day for API subteam meeting?

2015-08-13 Thread GHANSHYAM MANN
Sorry for the late reply. Monday or Tuesday works fine for me.

On Sat, Aug 8, 2015 at 1:48 AM, Sean Dague s...@dague.net wrote:
 Fridays have been kind of a rough day for the Nova API subteam. It's
 already basically the weekend for folks in AP, and the weekend is right
 around the corner for everyone else.

 I'd like to suggest we shift the meeting to Monday or Tuesday in the
 same timeslot (currently 12:00 UTC). Either works for me. Having this
 earlier in the week I also hope keeps the attention on the items we need
 to be looking at over the course of the week.

 If current regular attendees could speak up about day preference, please
 do. We'll make a change if this is going to work for folks.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Thanks  Regards
Ghanshyam Mann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] I have a question about openstack cinder zonemanager driver.

2015-08-13 Thread Chenying (A)
Hi, guys

 I am using a Brocade FC switch in my OpenStack environment. I have a 
question about the OpenStack cinder zonemanager driver.

I find that [fc-zone-manager] can only configure one zone driver. What if I want to 
use two FC switches from different vendors at the same time?

One is a Brocade FC switch, the other is a Cisco FC switch. Is there a method 
or solution to configure two FC switch zone drivers in one cinder.conf?

I want them both to support Zone Manager.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove]Implement the API to createmasterinstance and slave instances with one request

2015-08-13 Thread Doug Shelley
Tobe,

The BP for the feature that added replica_count is here - 
https://github.com/openstack/trove-specs/blob/master/specs/kilo/replication-v2.rst

Your suggestion for changing the semantics of the API is interesting; I would be 
interested to know what others in the community think about this as well. 
Maybe you could file a BP and suggest this improvement?

Regards,
Doug

From: 陈迪豪 chendi...@unitedstack.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Thursday, August 13, 2015 at 4:12 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove]Implement the API to createmasterinstance 
and slave instances with one request

We have read the code of replica_count and it's like what I thought.

We have a suggestion to extend this feature. When users set slave_of_id and 
replica_count at the same time, we just create replica instances. If they use 
replica_count without using slave_of_id, we should create a master 
instance for them and some replica instances of it.

For example, trove create $name --replica-of $id --replica_count=2 will 
create 2 replica instances. And trove create $name --replica_count=2 will 
create 1 master instance and 2 replica instances.

What do you think Doug?

Regards,
tobe from UnitedStack

-- Original --
From:  陈迪豪 chendi...@unitedstack.com;
Date:  Thu, Aug 13, 2015 12:25 PM
To:  openstack-dev@lists.openstack.org;
Subject:  Re: [openstack-dev] [trove]Implement the API to createmasterinstance 
and slave instances with one request

Thanks Doug.

It's really helpful and we need this feature as well. Can you point out the bp 
or patch of this?

I think we will add a --replica-count parameter to the trove create request. So 
trove-api will create the trove instance (asynchronously creating the nova instance) 
and then create some replica trove instances (asynchronously creating their nova 
instances). This is really useful for web front-end developers who want to create 
master and replica instances at the same time (they don't want to send multiple 
requests themselves).

Regards,
tobe from UnitedStack

-- Original --
From:  Doug Shelley d...@tesora.com;
Date:  Wed, Aug 12, 2015 10:21 PM
To:  openstack-dev@lists.openstack.org;
Subject:  Re: [openstack-dev] [trove]Implement the API to create masterinstance 
and slave instances with one request

As of Kilo, you can add a --replica-count parameter to trove create --replica-of 
to have it spin up multiple mysql slaves simultaneously. This same construct is 
in the python/REST API as well. I realize that you still need to create a 
master first, but thought I would point this out as it might be helpful to you.

Regards,
Doug


From: 陈迪豪 chendi...@unitedstack.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Tuesday, August 11, 2015 at 11:45 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove]Implement the API to create master instance and 
slave instances with one request

Now we can create mysql master instance and slave instance one by one.

It would be much better to allow user to create one master instance and 
multiple slave instances with one request.

Any suggestion about this, the API design or the implementation?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] keystone v3 problem in Kilo

2015-08-13 Thread Dolph Mathews
https://review.openstack.org/#/c/212515/

On Thu, Aug 13, 2015 at 6:57 AM, Alexandre Levine alexandrelev...@gmail.com
 wrote:

 Hi everybody,

 There is a problem using keystone v3 in Kilo by external EC2 API service.
 The problem doesn't exist for keystone v2 and it is fixed in master for
 keystone v3 by the following patch:

 https://github.com/openstack/python-keystoneclient/commit/f6ab133f25f00e041cd84aa8bbfb422594d1942f

 We do need to have EC2 API working with the keystone v3 in Kilo so the
 question is: would it be possible to backport the patch into Kilo?
 We can create a review and a bug if necessary, we just need to know that
 it's going to be accepted some time soon.

 The problem is in keystoneclient so we do have a workaround which can be
 introduced into our EC2 API code - it'll bypass keystone client and send
 raw REST request. But I understand it's not the most sound approach.

 Best regards,
   Alex Levine


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Liberty-3 BPs and gerrit topics

2015-08-13 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 08/11/2015 03:44 PM, Kyle Mestery wrote:
 Folks:
 
 To make reviewing all approved work for Liberty-3 in Neutron
 easier, I've created a handy dandy gerrit dashboard [1]. What will
 make this even more useful is if everyone makes sure to set their
 topics to something uniform from their approved LP BP found here
 [2]. The gerrit dashboard includes all Essential, High, and Medium
 priority BPs from that link. If everyone who has patches could make
 sure their gerrit topics for the patches are synced to what is in
 the LP BP, that will help as people use the dashboard to review in
 the final weeks before FF.
 

Thanks for the dashboard! It would be even more useful if folks got
their patches in good shape in terms of Jenkins votes, since the
dashboard filters out anything that lacks a +1 vote from the gate.

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVzISvAAoJEC5aWaUY1u571ZsIAL2xfAjmucKfgyVNWlFdiJFX
jSyRhN9Fs/e/+UJzym71OAeIMUP7Ua+FpG3i80Ivo+q7wxqMf4fq5tP7yx0PaWCS
B+7Gb+SUuKxT+QLB3tylH3kTgVZqpaiP8KfeBWHnvzjwxNUvuMHuvnA/2afwXouk
vWrzWaY+AybfkTjmuVNQMxxAqiVm06jpFtQLqAqvytyTrzCAu0JTQLAj50wCtpPU
qanBevXTuHisU+OsRCglrqdq8lCHuLpvjVka1PYVIWQWVLcEjKsNwViNYqzsg/Of
3/jFXuOsn9sHIwOUEbfc1fEl2ZmBqX+bwnvYOBUhvkfSG+NTEmzVaz8ItSxtcbk=
=3WXi
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] keystone v3 problem in Kilo

2015-08-13 Thread Alexandre Levine

Hi everybody,

There is a problem using keystone v3 in Kilo with the external EC2 API service. The 
problem doesn't exist for keystone v2, and it is fixed in master for keystone v3 
by the following patch:
https://github.com/openstack/python-keystoneclient/commit/f6ab133f25f00e041cd84aa8bbfb422594d1942f

We do need to have EC2 API working with the keystone v3 in Kilo so the question 
is: would it be possible to backport the patch into Kilo?
We can create a review and a bug if necessary, we just need to know that it's 
going to be accepted some time soon.

The problem is in keystoneclient, so we do have a workaround which can be 
introduced into our EC2 API code - it'll bypass the keystone client and send a raw 
REST request. But I understand it's not the most sound approach.

Best regards,
  Alex Levine


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance Unit Test failing due to version conflict

2015-08-13 Thread Flavio Percoco

On 11/08/15 14:23 -0700, Su Zhang wrote:

Hello,

I hit an exception while running glance unit test. It worked well previously
(one week ago) and I guess this is because a version conflict came up recently.
Could anyone give me some suggestions regarding the following issue?


Sorry for the late reply, Su Zhang.

I didn't find a version conflict/error below but a memory error. Are
you sure the box where you were running these tests had enough memory?

Flavio



Successfully installed Babel-2.0 Jinja2-2.8 Mako-1.0.1 MarkupSafe-0.23
MySQL-python-1.2.5 Paste-2.0.2 PasteDeploy-1.5.2 PrettyTable-0.7.2 PyYAML-3.11
Pygments-2.0.2 Routes-2.2 SQLAlchemy-0.9.10 Tempita-0.5.2 WSME-0.7.0
WebOb-1.4.1 aioeventlet-0.4 alembic-0.7.7 amqp-1.4.6 anyjson-0.3.3
argparse-1.2.1 cffi-1.1.2 coverage-3.7.1 cryptography-0.9.3 decorator-4.0.2
discover-0.4.0 docutils-0.12 elasticsearch-1.6.0 enum34-1.0.4 eventlet-0.17.4
extras-0.0.3 fixtures-1.3.1 flake8-2.2.4 funcsigs-0.4 functools32-3.2.3.post2
futures-3.0.3 glance-store-0.4.0 greenlet-0.4.7 hacking-0.10.2 httplib2-0.9.1
idna-2.0 ipaddress-1.0.14 iso8601-0.1.10 jsonschema-2.5.1
keystonemiddleware-1.5.2 kombu-3.0.26 linecache2-1.0.0 mccabe-0.2.1 mock-1.3.0
mox3-0.9.0 msgpack-python-0.4.6 netaddr-0.7.15 netifaces-0.10.4 networkx-1.10
ordereddict-1.1 oslo.concurrency-1.8.2 oslo.config-1.9.3 oslo.context-0.2.0
oslo.db-1.7.2 oslo.i18n-1.5.0 oslo.log-1.0.0 oslo.messaging-1.8.3
oslo.middleware-1.0.0 oslo.policy-0.3.2 oslo.serialization-1.4.0
oslo.utils-1.4.0 oslo.vmware-0.11.2 oslosphinx-2.5.0 oslotest-1.5.2
osprofiler-0.3.0 pbr-0.11.0 pep8-1.5.7 posix-ipc-1.0.0 psutil-1.2.1
psycopg2-2.6.1 pyOpenSSL-0.15.1 pyasn1-0.1.8 pycadf-0.8.0 pycparser-2.14
pycrypto-2.6.1 pyflakes-0.8.1 pysendfile-2.0.0 python-cinderclient-1.3.1
python-keystoneclient-1.3.2 python-mimeparse-0.1.4 python-subunit-1.1.0
python-swiftclient-2.4.0 pytz-2015.4 qpid-python-0.26 repoze.lru-0.6
requests-2.7.0 retrying-1.3.3 semantic-version-2.4.2 simplegeneric-0.8.1
simplejson-3.8.0 six-1.9.0 sphinx-1.2.3 sqlalchemy-migrate-0.9.7
sqlparse-0.1.16 stevedore-1.3.0 suds-0.4 taskflow-0.7.1 testrepository-0.0.20
testresources-0.2.7 testscenarios-0.5.0 testtools-1.8.0 traceback2-1.4.0
trollius-2.0 unittest2-1.1.0 urllib3-1.10.4 xattr-0.7.8
warning: no files found matching 'AUTHORS'
warning: no files found matching 'ChangeLog'
warning: no previously-included files matching '*.pyc' found anywhere in
distribution
warning: no files found matching 'ChangeLog'
warning: no files found matching 'builddeb.sh'
warning: no files found matching 'AUTHORS'
warning: no files found matching 'run_tests.py'
warning: no files found matching 'ChangeLog'
ERROR:root:Error parsing
Traceback (most recent call last):
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/pbr/core.py,
line 109, in pbr
    attrs = util.cfg_to_args(path)
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/pbr/util.py,
line 245, in cfg_to_args
    kwargs = setup_cfg_to_setup_kwargs(config)
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/pbr/util.py,
line 364, in setup_cfg_to_setup_kwargs
    cmd = cls(dist)
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/setuptools/
__init__.py, line 124, in __init__
    _Command.__init__(self,dist)
  File /usr/lib/python2.7/distutils/cmd.py, line 59, in __init__
    raise TypeError, dist must be a Distribution instance
TypeError: dist must be a Distribution instance
Traceback (most recent call last):
  File setup.py, line 30, in module
    pbr=True)
  File /usr/lib/python2.7/distutils/core.py, line 151, in setup
    dist.run_commands()
  File /usr/lib/python2.7/distutils/dist.py, line 953, in run_commands
    self.run_command(cmd)
  File /usr/lib/python2.7/distutils/dist.py, line 972, in run_command
    cmd_obj.run()
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/setuptools/
command/develop.py, line 32, in run
    self.install_for_development()
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/setuptools/
command/develop.py, line 132, in install_for_development
    self.process_distribution(None, self.dist, not self.no_deps)
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/setuptools/
command/easy_install.py, line 709, in process_distribution
    [requirement], self.local_index, self.easy_install
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/pkg_resources
/__init__.py, line 836, in resolve
    dist = best[req.key] = env.best_match(req, ws, installer)
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/pkg_resources
/__init__.py, line 1081, in best_match
    return self.obtain(req, installer)
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/pkg_resources
/__init__.py, line 1093, in obtain
    return installer(requirement)
  File /opt/stack/glance/.venv/local/lib/python2.7/site-packages/setuptools/
command/easy_install.py, line 629, in easy_install
    return self.install_item(spec, 

Re: [openstack-dev] [keystone] keystone v3 problem in Kilo

2015-08-13 Thread Alexandre Levine

Thank you Dolph.

Best regards,
  Alex Levine

On 8/13/15 4:02 PM, Dolph Mathews wrote:

https://review.openstack.org/#/c/212515/

On Thu, Aug 13, 2015 at 6:57 AM, Alexandre Levine 
alexandrelev...@gmail.com mailto:alexandrelev...@gmail.com wrote:


Hi everybody,

There is a problem using keystone v3 in Kilo by external EC2 API
service. The problem doesn't exist for keystone v2 and it is fixed
in master for keystone v3 by the following patch:

https://github.com/openstack/python-keystoneclient/commit/f6ab133f25f00e041cd84aa8bbfb422594d1942f

We do need to have EC2 API working with the keystone v3 in Kilo so
the question is: would it be possible to backport the patch into Kilo?
We can create a review and a bug if necessary, we just need to
know that it's going to be accepted some time soon.

The problem is in keystoneclient so we do have a workaround which
can be introduced into our EC2 API code - it'll bypass keystone
client and send raw REST request. But I understand it's not the
most sound approach.

Best regards,
  Alex Levine


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Consistent functional test failures (seems infra not have enough resource)

2015-08-13 Thread Jeremy Stanley
On 2015-08-13 19:38:07 +0800 (+0800), Kai Qiang Wu wrote:
 I did talked to infra, which I think it is resource issue, But
 they thought it is nova issue,
[...]

No, I said the error was being raised by Nova, so was not an error
coming _from_ the infrastructure we manage. If your jobs are more
resource-intensive than a typical devstack/tempest job, you'll want
to look at ways to scale them back.

 It is 20GB disk space, so failed for that.

Correct, we run jobs on resources donated by public service
providers. Some of them only provide a 20GB root disk. There's
generally an ephemeral disk mounted at /opt with additional space if
you can modify your job to leverage that for whatever is running out
of space.

 I think it is related with this, the jenkins allocated VM disk
 space is not large. I am curious why it failed so often recently.
 Does os-infra changed something ?

Nothing has been intentionally changed with our disk space on job
workers as far as I'm aware. Different workers have varying root
disk sizes depending on the provider where they were booted, but
they could be as small as 20GB so your job will need to take that
into account.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-13 Thread Flavio Percoco

On 13/08/15 11:11 +, Kuvaja, Erno wrote:




-Original Message-
From: Mike Perez [mailto:thin...@gmail.com]
Sent: Wednesday, August 12, 2015 4:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and
glance

On Wed, Aug 12, 2015 at 2:23 AM, Kuvaja, Erno kuv...@hp.com wrote:
 -Original Message-
 From: Mike Perez [mailto:thin...@gmail.com]
 Sent: 11 August 2015 19:04
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store
 and glance

 On 15:06 Aug 11, Kuvaja, Erno wrote:
   -Original Message-
   From: Jay Pipes [mailto:jaypi...@gmail.com]

 snip

   Having the image cache local to the compute nodes themselves
   gives the best performance overall, and with glance_store, means
   that glance-api isn't needed at all, and Glance can become just a
   metadata repository, which would be awesome, IMHO.
 
  You have any figures to back this up in scale? We've heard similar
  claims for quite a while and as soon as people starts to actually
  look into how the environments behaves, they quite quickly turn
  back. As you're not the first one, I'd like to make the same
  request as to everyone before, show your data to back this claim
  up! Until that it is just like you say it is, opinion.;)

 The claims I make with Cinder doing caching on its own versus just
 using Glance with rally with an 8G image:

 Creating/deleting 50 volumes w/ Cinder image cache: 324 seconds
 Creating/delete 50 volumes w/o Cinder image cache: 3952 seconds

 http://thing.ee/x/cache_results/

 Thanks to Patrick East for pulling these results together.

 Keep in mind, this is using a block storage backend that is
 completely separate from the OpenStack nodes. It's *not* using a
 local LVM all in one OpenStack contraption. This is important because
 even if you have Glance caching enabled, and there was no cache miss,
 you still have to dd the bits to the block device, which is still
 going over the network. Unless Glance is going to cache on the storage
array itself, forget about it.

 Glance should be focusing on other issues, rather than trying to make
 copying image bits over the network and dd'ing to a block device faster.

 --
 Mike Perez

 Thanks Mike,

 So without cinder cache your times averaged roughly 150+second marks.
 The couple of first volumes with the cache took roughly 170+seconds.
 What the data does not tell, was cinder pulling the images directly
 from glance backend rather than through glance on either of these cases?

Oh but I did, and that's the beauty of this, the files marked cinder-cache-
x.html are avoiding Glance as soon as it can, using the Cinder generic image
cache solution [1]. Please reread what I said: Glance is unable to do
caching in a storage array, so we don't rely on Glance. It's too slow otherwise.

Take this example with 50 volumes created from image with Cinder's image
cache
[2]:

* Is using Glance cache (oh no cache miss)
* Downloads the image from whatever glance store
* dd's the bits to the exported block device.
* the bits travel to the storage array that the block device was exported
from.
* [2nd-50th] request of that same image comes, Cinder instead just
references
  a cinder:// endpoint which has the storage array do a copy on write. ZERO
  COPYING since we can clone the image. Just a reference pointer and done,
move
  on.

 Somehow you need to seed those caches and that seeding
time/mechanism
 is where the debate seems to be. Can you afford keeping every image in
 cache so that they are all local or if you need to pull the image to
 seed your cache how much you will benefit that your 100 cinder nodes
 are pulling it directly from backend X versus glance caching/sitting
 in between. How block storage backend handles that 100 concurrent
 reads by different client when you are seeding it between different
 arrays? The scale starts matter here because it makes a lot of
 difference on backend if it's couple of cinder or nova nodes
 requesting the image vs. 100s of them. Lots of backends tends to not
 like such loads or we outperform them due not having to fight for the
bandwidth with other consumers of that backend.

Are you seriously asking if a backend is going to withstand concurrent
reads compared to Glance cache?

All storage backends do is I/O, unlike Glance which is trying to do a million
things and just pissing off the community.


Thanks, I'm so happy to hear that it's not just a couple of us who think that 
the project is lacking focus.


This is one of the reasons this email was asked for, rather than making
decisions based on assumptions (you're free to read
- or not - the meeting logs).

Saying Glance is just pissing off the community helps spread a
rumor that has been floating around, which causes more harm than good,
and I'm not willing to accept that.

If people have 

Re: [openstack-dev] [rally] [Ceilometer] profiler sample resource id

2015-08-13 Thread Roman Vasilets
Hi,
   Could you provide the link to this code?

On Wed, Aug 12, 2015 at 9:22 PM, Pradeep Kilambi pkila...@redhat.com
wrote:

 We're in the process of converting existing meters to use a more
 declarative approach where we add the meter definition as part of a yaml.
 As part of this transition there are a few notification handlers where the id
 is not consistent. For example, in the profiler notification handler the
 resource_id is set to "profiler-%s" % message["payload"]["base_id"]. Is
 there a reason we have the prefix? Can we ignore this and directly set it
 to message["payload"]["base_id"]? Seems like there is no real need for the
 prefix here unless I'm missing something. Can we go ahead and drop this?

 If we don't hear anything i'll assume there is no objection to dropping
 this prefix.


 Thanks,

 --
 --
 Pradeep Kilambi; irc: prad
 OpenStack Engineering

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] new engine is almost here!

2015-08-13 Thread Kirill Zaitsev
This week yaql 1.0.0rc2 has been released: 
https://pypi.python.org/pypi/yaql/1.0.0.0rc2 - this rc2 would most likely become 
the 1.0 release.
This also means that murano is upgrading its engine to accommodate changes 
and improvements in the new yaql.

If you're working with upstream/master murano or are interested in helping improve 
the new engine, please try and test it. Here are the commits that do the migration: 
murano: https://review.openstack.org/#/c/204099/ 
murano-dashboard: https://review.openstack.org/#/c/203588/
python-muranoclient: https://review.openstack.org/#/c/202684/
Do not forget to install the updated client and the new yaql into the env with the 
dashboard and murano virtual-envs!

If you have any complex custom apps, be sure to test them against the new 
engine, to help us find any bugs we might have missed there!


-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-08-13 Thread Rich Megginson

On 08/13/2015 12:41 AM, Gilles Dubreuil wrote:

Hi Matthew,

On 11/08/15 01:14, Rich Megginson wrote:

On 08/10/2015 07:46 AM, Matthew Mosesohn wrote:

Sorry to everyone for bringing up this old thread, but it seems we may
need more openstackclient/keystone experts to settle this.

I'm referring to the comments in https://review.openstack.org/#/c/207873/
Specifically comments from Richard Megginson and Gilles Dubreuil
indicating openstackclient behavior for v3 keystone API.


A few items seem to be under dispute:
1 - Keystone should be able to accept v3 requests at
http://keystone-server:5000/

I don't think so.  Keystone requires the version suffix /v2.0 or /v3.


Yes, if the public endpoint is set without a version then the service
can deal with either version.

http://paste.openstack.org/raw/412819/

That is not true for the admin endpoint (authentication is already done;
the admin service deals only with tokens), at least for now - Keystone
devs are working on it.


I thought it worked like this - the openstack cli will infer from the 
arguments if it should do v2 or v3 auth.  In the above case for v3, 
since you specify the username/password, osc knows it has to use 
password auth (as opposed to token auth), and since all of the required 
v3 arguments are provided (v3 api version, domains for user/project), it 
can use v3 password auth.  It knows it has to use the /v3/auth/token 
path to get a token.


Similarly for v2, since it only has username/password, no v3 api or v3 
domain arguments, it knows it has to use v2 password auth.  It knows it 
has to use /v2.0/token to get a token.


With the puppet-keystone code, since it uses TOKEN/URL, osc cannot infer 
if it can use v2 or v3, so it requires the /v2.0 or /v3 suffix, and 
the api version.
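
A hedged sketch of what this version inference boils down to with
python-keystoneclient's auth plugins (the endpoint, user names and passwords
below are placeholders):

    from keystoneclient import session
    from keystoneclient.auth.identity import v2, v3

    unversioned = 'http://keystone-server:5000'

    # v3: domain-scoped password auth -> the client talks to /v3/auth/tokens.
    auth_v3 = v3.Password(auth_url=unversioned + '/v3',
                          username='admin', password='secret',
                          project_name='admin',
                          user_domain_name='Default',
                          project_domain_name='Default')

    # v2: no domain arguments -> the client talks to /v2.0/tokens.
    auth_v2 = v2.Password(auth_url=unversioned + '/v2.0',
                          username='admin', password='secret',
                          tenant_name='admin')

    sess = session.Session(auth=auth_v3)  # or auth=auth_v2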





2 - openstackclient should be able to interpret v3 requests and append
v3/ to OS_AUTH_URL=http://keystone-server:5000/ or rewrite the URL
if it is set as
OS_AUTH_URL=http://keystone-server:5000/

It does, if it can determine from the given authentication arguments if
it can do v3 or v2.0.


It effectively does, again, assuming the path doesn't contain a version
number (x.x.x.x:5000)


3 - All deployments require /etc/keystone/keystone.conf with a token
(and not simply use openrc for creating additional endpoints, users,
etc beyond keystone itself and an admin user)

No.  What I said about this issue was Most people using
puppet-keystone, and realizing Keystone resources on nodes that are not
the Keystone node, put a /etc/keystone/keystone.conf on that node with
the admin_token in it.

That doesn't mean the deployment requires /etc/keystone/keystone.conf.
It should be possible to realize Keystone resources on non-Keystone
nodes by using ENV or openrc or other means.


Agreed. Also keystone.conf is used only to bootstrap keystone
installation and create admin users, etc.



I believe it should be possible to set v2.0 keystone OS_AUTH_URL in
openrc and puppet-keystone + puppet-openstacklib should be able to
make v3 requests sensibly by manipulating the URL.

Yes.  Because for the puppet-keystone resource providers, they are coded
to a specific version of the api, and therefore need to be able to
set/override the OS_IDENTITY_API_VERSION and the version suffix in the URL.


No. To support V2 and V3, the OS_AUTH_URL must not contain any version.

The less we deal with the version number the better!


Additionally, creating endpoints/users/roles should be possible via
openrc alone.

Yes.


Yes, the openrc variables are used; if they are not available, then the service
token from keystone.conf is used.


It's not possible to write single node tests that can demonstrate this
functionality, which is why it probably went undetected for so long.

And since this is supported, we need tests for this.

I'm not sure what the issue is besides the fact that keystone_puppet doesn't
generate an RC file once the admin user has been created. That is left to
the composition layer, although we might want to integrate that.

If that issue persists, assuming the AUTH_URL is free of a version
number and an openrc is in place, we're going to need a bug number
to track the investigation.


If anyone can speak up on these items, it could help influence the
outcome of this patch.

Thank you for your time.

Best Regards,
Matthew Mosesohn


Thanks,
Gilles


On Fri, Jul 31, 2015 at 6:32 PM, Rich Megginson rmegg...@redhat.com
mailto:rmegg...@redhat.com wrote:

 On 07/31/2015 07:18 AM, Matthew Mosesohn wrote:

 Jesse, thanks for raising this. Like you, I should just track
 upstream
 and wait for full V3 support.

 I've taken the quickest approach and written fixes to
 puppet-openstacklib and puppet-keystone:
 https://review.openstack.org/#/c/207873/
 https://review.openstack.org/#/c/207890/

 and again to Fuel-Library:
 

Re: [openstack-dev] [rally] [Ceilometer] profiler sample resource id

2015-08-13 Thread Pradeep Kilambi
On Thu, Aug 13, 2015 at 8:50 AM, Roman Vasilets rvasil...@mirantis.com
wrote:

 Hi,
Could you provide the link to this code?



Here it is:

https://github.com/openstack/ceilometer/blob/master/ceilometer/profiler/notifications.py#L76
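
To make the question concrete, here is a paraphrase of the current behaviour
versus the proposed change -- illustrative only, the base_id value below is
made up and this is not a patch:

    message = {"payload": {"base_id": "4f6b8a0e-example-trace-id"}}

    # current handler behaviour: the sample resource_id carries a prefix
    resource_id = "profiler-%s" % message["payload"]["base_id"]

    # proposed: use the trace base_id directly, with no prefix
    resource_id = message["payload"]["base_id"]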





 On Wed, Aug 12, 2015 at 9:22 PM, Pradeep Kilambi pkila...@redhat.com
 wrote:

 We're in the process of converting existing meters to use a more
 declarative approach where we add the meter definition as part of a yaml.
 As part of this transition there are a few notification handlers where the id
 is not consistent. For example, in the profiler notification handler the
 resource_id is set to "profiler-%s" % message["payload"]["base_id"]. Is
 there a reason we have the prefix? Can we ignore this and directly set it
 to message["payload"]["base_id"]? Seems like there is no real need for the
 prefix here unless I'm missing something. Can we go ahead and drop this?

 If we don't hear anything i'll assume there is no objection to dropping
 this prefix.


 Thanks,

 --
 --
 Pradeep Kilambi; irc: prad
 OpenStack Engineering

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
--
Pradeep Kilambi; irc: prad
OpenStack Engineering
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-13 Thread Mike Perez
On 11:11 Aug 13, Kuvaja, Erno wrote:
 
 That looks interesting, in a very good way.
 
 From the commit message of that review you referred:
 
 These image-volumes are host specific, so each backend may end up with
 its very own image-volume to do clones from.
 
 Does that mean that each cinder host will still need to pull the image from
 glance rather than sharing the cache across the deployment? No pun intended,
 but if so, this is the exact point we are referring to here. It doesn't matter
 if it's 100 nova or 100 cinder hosts: they will get the data for their local
 caches faster via glance than by talking directly to, for example, swift. And
 that is the step we're talking about here all the time. I doubt anyone is
 willing to question the performance benefit of a local cache in either the
 nova or the cinder case.

 My question was sincere regarding multiple hosts accessing the same cinder
 volume (ref. having glance use the cinder backend and having the glance
 bypass); my perhaps false understanding has been that block devices rarely
 like multiple hosts poking them. I'm not questioning the raw I/O they provide.
 And if my assumption that 100 hosts accessing the same volume concurrently
 causes issues was false, then I definitely see the benefit of being able to
 just give cinder a reference pointer if the cinder backend is in use, but in
 that case I do not see any benefit in cinder consuming glance_store to do
 those actions. I'm pretty sure ye guys know a bit better how to optimize your
 operations than we do. And that location data is available via the V2 API
 already if the deployer allows it.
 
 This means that using the cinder backend in glance and the caching ye guys are
 working on, you do not need glance_store; you can just request that glance hand
 over the metadata, including the location of the image, instead of asking glance
 to provide the image data. Now, the last update I've heard on the condition of
 the cinder driver in glance_store wasn't great, so that might need a bit more
 work to be a viable option in production use.
 
 I really appreciate you taking the time to bend the railroad for me(/us),
 Erno

There is no pulling of data except to do the initial cache. I'll explain again:

1) First request comes, Cinder exports a block device onto itself.
2) Cinder gets information from Glance the image is stored in Swift.
3) It does a download and copies the image onto the block device.
4) [2nd-50th] request comes. Cinder does not copy data. There is no moving of
   data happening at all. Cinder simply sends a command to the backend that
   already has the image and does a clone or copy-on-write. The new volume is
   effectively a reference pointer to the image stored in the block storage
   backend.

No data copying. No data reading from the Cinder node. What usually happens
next is that nova attaches the volume and boots off of it.
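
A pseudocode sketch of that flow -- every name below is made up for
illustration; this is not actual Cinder code:

    def create_volume_from_image(backend, image_id, glance):
        cached = backend.find_image_volume(image_id)
        if cached is None:
            # First request only: download the image bits once and keep an
            # image-volume on the backend to clone from later.
            location = glance.get_image_location(image_id)
            cached = backend.create_image_volume(image_id)
            backend.copy_image_to_volume(location, cached)
        # Every later request: no data moves through the Cinder node; the
        # backend clones / copy-on-writes from the cached image-volume.
        return backend.clone_volume(cached)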

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] [API]: API to get the list of router ports

2015-08-13 Thread Brian Haley

On 08/13/2015 04:04 AM, Padmanabhan Krishnan wrote:

Hello,
Is there a Neutron public API to get the list of router ports? Something similar
to what the command neutron router-port-list {tenant} gives.


router-port-list takes a router as an argument, not a tenant.


I wasn't able to find one in the Neutron API doc or in
neutronclient/v2_0/client.py.

I think that with a combination of subnet_show and port_list one can find the
list of neutron router ports, but I just wanted to see if there's an API
already available.


$ neutron port-list -- --device_owner=network:router_interface

or 'router_interface_distributed' if you have DVR enabled.
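
If you want this from Python rather than the CLI, the same filter works
through python-neutronclient's list_ports(). A rough sketch -- the
credentials, endpoint and router UUID below are placeholders:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # all router interface ports (add network:router_interface_distributed
    # as well if DVR is enabled)
    ports = neutron.list_ports(
        device_owner='network:router_interface')['ports']

    # or the ports of one specific router, like `neutron router-port-list`
    router_ports = neutron.list_ports(device_id='ROUTER_UUID')['ports']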

-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Ethan Gafford for the core reviewer team

2015-08-13 Thread Matthew Farrellee

On 08/13/2015 10:56 AM, Sergey Lukjanov wrote:

Hi folks,

I'd like to propose Ethan Gafford as a member of the Sahara core
reviewer team.

Ethan has been contributing to Sahara for a long time and doing a great job of
reviewing and improving Sahara. Here are the statistics for reviews
[0][1][2] and commits [3]. BTW, Ethan is already a stable-maint team core
for Sahara.

Existing Sahara core reviewers, please vote +1/-1 for the addition of
Ethan to the core reviewer team.

Thanks.

[0]
https://review.openstack.org/#/q/reviewer:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22,n,z
[1] http://stackalytics.com/report/contribution/sahara-group/90
[2] http://stackalytics.com/?user_id=egafford&metric=marks
[3]
https://review.openstack.org/#/q/owner:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22+status:merged,n,z

--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


+1 ethan has really taken to sahara, providing valuable input to both 
development and deployments as well as taking on the manila integration



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Liberty SPFE Request - IDP Specific WebSSO

2015-08-13 Thread Morgan Fainberg
To be fair, this is pushing late into the cycle for adding a new target for 
Liberty. We already have a very large body of code that has historically not 
received consistent reviewing. My concern is that we're again rushing things in 
at the wire and will get a substandard implementation. 

I won't block this, but as with the other spec freeze exceptions we will vote at 
the next keystone meeting on accepting this spec freeze exception. 

Please make sure to add it to the weekly meeting and feel free to continue this 
discussion here on the ML to cover justifications etc. 

--Morgan

Sent via mobile

 On Aug 12, 2015, at 16:20, Lance Bragstad lbrags...@gmail.com wrote:
 
 Hey all, 
 
 
 I'd like to propose a spec proposal freeze exception for IDP Specific WebSSO 
 [0].
 
 This topic has been discussed, in length, on the mailing list [1], where this 
 spec has been referenced as a possible solution [2]. This would allow for 
 multiple Identity Providers to use the same protocol. As described on the 
 mailing list, this proposal would help with the public cloud cases for 
 federated authentication workflows, where Identity Providers can't be 
 directly exposed to users. 
 
 The flow would look similar to what we already do for federated 
 authentication [3], but it includes adding a call in step 3. Most of the code 
 for step 3 already exists in Keystone; it would more or less be adding it to 
 the path.
 
 
 Thanks!
 
 
 [0] https://review.openstack.org/#/c/199339/2
 [1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071131.html
 [2] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071571.html
 [3] http://goo.gl/lLbvE1
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Bug triage/fix and doc update days in Liberty

2015-08-13 Thread Sergey Lukjanov
Hi folks,

At today's IRC meeting [0] we agreed to have:

1) Bug triage day on Aug 17
We're looking for a volunteer to coordinate it ;) If someone wants to do
it, please reply to this email.
http://etherpad.openstack.org/p/sahara-liberty-bug-triage-day


2) Bug fix day on Aug 24
Ethan (egafford) volunteered to coordinate it.
http://etherpad.openstack.org/p/sahara-liberty-bug-fix-day


3) Doc update day on Aug 31
Mike (elmiko) volunteered to coordinate it.
http://etherpad.openstack.org/p/sahara-liberty-doc-update-day


Coordinators, please add some initial notes to the etherpads and ensure
that folks use them to sync efforts. For communication, let's use the
#openstack-sahara IRC channel as always.

Thanks.

[0]
http://eavesdrop.openstack.org/meetings/sahara/2015/sahara.2015-08-13-14.00.html

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] How to handle data containers

2015-08-13 Thread Steven Dake (stdake)
Hi,

There has been a lot of debate in the Kolla community about how to handle data 
containers, but it has happened all over the place (IRC, various reviews, etc.).  
I’d like to have one record of the various tribal knowledge on this subject.

The latest proposal is to create one data container which is very small in 
nature and can be backed up with no VOLUME in the Dockerfile.  Next start that 
container image with different names for all of the various data containers we 
operate.  This should provide a good backup solution for the data containers, 
and removes the current extra stuff hiding in the various API containers that 
we use as data containers now in the ansible deployment.

I have created a blueprint:
https://blueprints.launchpad.net/kolla/+spec/one-data-container

It is in the discussion phase.  If there is any technical reason not to do 
this, let's get it out on the table now; otherwise I will approve the blueprint 
early next week and someone (perhaps me?) will begin the implementation.  The 
goal is just to simplify the code base and get rid of the data container bloat. 
;)

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ][third-party-ci]Running custom code before tests

2015-08-13 Thread Asselin, Ramy
This is what we do:
https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample#L23

Ramy

From: Andrey Pavlov [mailto:andrey...@gmail.com]
Sent: Thursday, August 13, 2015 12:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] ][third-party-ci]Running custom code before tests

Hi,
this file has changed since yesterday.
The new link is
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/devstack-gate.yaml#L2146
or you can find these lines by yourself -
  export DEVSTACK_LOCAL_CONFIG="CINDER_ISCSI_HELPER=lioadm"
  export DEVSTACK_LOCAL_CONFIG+=$'\n'"CINDER_LVM_TYPE=thin"
I mean that you can try to change CINDER_ISCSI_HELPER in devstack.

On Thu, Aug 13, 2015 at 9:47 AM, Eduard Matei 
eduard.ma...@cloudfounders.com wrote:
Hi,

I think you pointed me to the wrong file, the devstack-gate yaml (and line 2201 
contains timestamps).
I need an example of how to configure tempest to use my driver.

I tried an EXPORT in the jenkins job (before executing the dsvm shell script), but 
looking at the tempest.txt log it shows that it still uses the defaults. How 
do I override those defaults?

Thanks,

Eduard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kind regards,
Andrey Pavlov.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][cinder] taskflow 0.6.1 breaking cinder py26 in stable/juno

2015-08-13 Thread Matt Riedemann



On 8/12/2015 7:04 PM, Robert Collins wrote:

On 13 August 2015 at 10:31, Mike Perez thin...@gmail.com wrote:

On Wed, Aug 12, 2015 at 1:13 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:

Bug reported here:

https://bugs.launchpad.net/taskflow/+bug/1484267

We need a 0.6.2 release of taskflow from stable/juno with the g-r caps (for
networkx specifically) to unblock the cinder py26 job in stable/juno.


Josh Harlow is on vacation.

I asked in #openstack-state-management channel who else can do a
release, but haven't heard back from anyone yet.


The library releases team manages all oslo releases; submit a proposed
release to openstack/releases. I need to pop out shortly but will look
in in my evening to see about getting the release tagged. If Dims or
Doug are around now they can do it too, obviously :)

-Rob




That's the easy part.  The hard part is finding someone that can create 
the stable/juno branch for the taskflow project.  I've only ever seen 
dhellmann do that for oslo libraries.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Proposing Ethan Gafford for the core reviewer team

2015-08-13 Thread Sergey Lukjanov
Hi folks,

I'd like to propose Ethan Gafford as a member of the Sahara core reviewer
team.

Ethan has been contributing to Sahara for a long time and doing a great job of
reviewing and improving Sahara. Here are the statistics for reviews
[0][1][2] and commits [3]. BTW, Ethan is already a stable-maint team core for
Sahara.

Existing Sahara core reviewers, please vote +1/-1 for the addition of Ethan
to the core reviewer team.

Thanks.

[0]
https://review.openstack.org/#/q/reviewer:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22,n,z
[1] http://stackalytics.com/report/contribution/sahara-group/90
[2] http://stackalytics.com/?user_id=egafford&metric=marks
[3]
https://review.openstack.org/#/q/owner:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22+status:merged,n,z

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Ethan Gafford for the core reviewer team

2015-08-13 Thread michael mccune

+1, Ethan has been a great asset to the team.

mike

On 08/13/2015 10:56 AM, Sergey Lukjanov wrote:

Hi folks,

I'd like to propose Ethan Gafford as a member of the Sahara core
reviewer team.

Ethan has been contributing to Sahara for a long time and doing a great job of
reviewing and improving Sahara. Here are the statistics for reviews
[0][1][2] and commits [3]. BTW, Ethan is already a stable-maint team core
for Sahara.

Existing Sahara core reviewers, please vote +1/-1 for the addition of
Ethan to the core reviewer team.

Thanks.

[0]
https://review.openstack.org/#/q/reviewer:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22,n,z
[1] http://stackalytics.com/report/contribution/sahara-group/90
[2] http://stackalytics.com/?user_id=egafford&metric=marks
[3]
https://review.openstack.org/#/q/owner:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22+status:merged,n,z

--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Ethan Gafford for the core reviewer team

2015-08-13 Thread Trevor McKay
+1, welcome addition

On Thu, 2015-08-13 at 17:56 +0300, Sergey Lukjanov wrote:
 Hi folks,
 
 
 I'd like to propose Ethan Gafford as a member of the Sahara core
 reviewer team.
 
 
 Ethan has been contributing to Sahara for a long time and doing a great job of
 reviewing and improving Sahara. Here are the statistics for reviews
 [0][1][2] and commits [3]. BTW, Ethan is already a stable-maint team core
 for Sahara.
 
 
 Existing Sahara core reviewers, please vote +1/-1 for the addition of
 Ethan to the core reviewer team.
 
 
 Thanks.
 
 
 [0] https://review.openstack.org/#/q/reviewer:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22,n,z
 [1] http://stackalytics.com/report/contribution/sahara-group/90
 [2] http://stackalytics.com/?user_id=egafford&metric=marks
 [3] https://review.openstack.org/#/q/owner:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22+status:merged,n,z
 
 
 -- 
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] new engine is almost here!

2015-08-13 Thread Victor Ryzhenkin
Hi folks!

I would like to share a localrc [0] for murano deployment with devstack. Hope 
this helps you quickly deploy devstack with murano and perform testing.
Before you push the deploy button of devstack, please make sure that your yaql 
is on version 1.0.0rc2.
To prevent reinstallation of yaql during deployment, I suggest you install 
yaql from github. Useful commands can be found at [1].

[0] http://paste.openstack.org/show/412897/
[1] http://paste.openstack.org/show/412899/

Cheers!
-- 
Victor Ryzhenkin
freerunner on #freenode

On 13 August 2015 at 17:38:22, Kirill Zaitsev (kzait...@mirantis.com) 
wrote:

This week yaql 1.0.0rc2 has been released: 
https://pypi.python.org/pypi/yaql/1.0.0.0rc2 This rc2 would most likely become 
the 1.0 release.
This also means that murano is upgrading its engine to accommodate changes 
and improvements in the new yaql.

If you’re working with upstream/master murano or are interested in helping improve 
the new engine, please try and test it. Here are the commits that do the migration: 
murano: https://review.openstack.org/#/c/204099/ 
murano-dashboard: https://review.openstack.org/#/c/203588/
python-muranoclient: https://review.openstack.org/#/c/202684/
Do not forget to install the updated client and the new yaql into the env with the 
dashboard and murano virtual-envs!

If you have any complex custom apps, be sure to test them against the new 
engine, to help us find any bugs we might have missed  there!


-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc
__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][fwaas] IRC meeting and housekeeping

2015-08-13 Thread Sean M. Collins
I'm also working to register #openstack-fwaas with Freenode's ChanServ -
so if your IRC client is configured to automatically connect to it,
disable it for a day while we regain OP for the channel and register it.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][cinder] taskflow 0.6.1 breaking cinder py26 in stable/juno

2015-08-13 Thread Matt Riedemann



On 8/13/2015 11:12 AM, Joshua Harlow wrote:

On Thu, 13 Aug 2015 09:25:36 -0500
Matt Riedemann mrie...@linux.vnet.ibm.com wrote:




On 8/12/2015 7:04 PM, Robert Collins wrote:

On 13 August 2015 at 10:31, Mike Perez thin...@gmail.com wrote:

On Wed, Aug 12, 2015 at 1:13 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:

Bug reported here:

https://bugs.launchpad.net/taskflow/+bug/1484267

We need a 0.6.2 release of taskflow from stable/juno with the g-r
caps (for networkx specifically) to unblock the cinder py26 job
in stable/juno.


Josh Harlow is on vacation.

I asked in #openstack-state-management channel who else can do a
release, but haven't heard back from anyone yet.


The library releases team manages all oslo releases; submit a
proposed release to openstack/releases. I need to pop out shortly
but will look in in my evening to see about getting the release
tagged. If Dims or Doug are around now they can do it too,
obviously :)

-Rob




That's the easy part.  The hard part is finding someone that can
create the stable/juno branch for the taskflow project.  I've only
ever seen dhellmann do that for oslo libraries.



Checking in, anyone in oslo-core should be able to make this branch
(and a release), but if you guys need me to I can do it later, just
traveling for a little while longer...

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Doug created it, this is the g-r sync:

https://review.openstack.org/#/c/212652/

Once that's merged I'll propose the 0.6.2 release to the releases 
project and ping Doug to tag that.


Then it's all rainbows and unicorns again for cinder in stable/juno!

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos] request to merge feature/qos back into master

2015-08-13 Thread Miguel Angel Ajo

I owe you all a video of the feature to show how it works.
I was supposed to deliver it today, but I've been partly sick
today.

The script is ready; I just have to record and share it, hopefully
tomorrow (Friday).

Best,
Miguel Ángel

Ihar Hrachyshka wrote:


On 08/13/2015 06:54 AM, Andreas Jaeger wrote:

On 08/12/2015 09:55 PM, Ihar Hrachyshka wrote:

Hi all,

with great pleasure, I want to request a coordinated review for
merging feature/qos branch back to master:

https://review.openstack.org/#/c/212170/

Great!

Please also send a patch for project-config to remove the special
handling of that branch...


Right you are Andreas!

I have some infra/qa patches to enable QoS in gate [1].

Specifically, the order (partially controlled by Depends-On) should be
similar to:

- merge feature/qos to master: https://review.openstack.org/212170
- kill project-config hacks: https://review.openstack.org/212475
- add q-qos support to devstack: https://review.openstack.org/212453
- enable q-qos in neutron gate: https://review.openstack.org/212464
- re-enable API tests: https://review.openstack.org/212466

[1]:
https://review.openstack.org/#/q/status:open+topic:bp/quantum-qos-api+owner:%22Ihar+Hrachyshka+%253Cihrachys%2540redhat.com%253E%22,n,z

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][cinder] taskflow 0.6.1 breaking cinder py26 in stable/juno

2015-08-13 Thread Thierry Carrez
Joshua Harlow wrote:
 On Thu, 13 Aug 2015 09:25:36 -0500
 Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 


 On 8/12/2015 7:04 PM, Robert Collins wrote:
 On 13 August 2015 at 10:31, Mike Perez thin...@gmail.com wrote:
 On Wed, Aug 12, 2015 at 1:13 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 Bug reported here:

 https://bugs.launchpad.net/taskflow/+bug/1484267

 We need a 0.6.2 release of taskflow from stable/juno with the g-r
 caps (for networkx specifically) to unblock the cinder py26 job
 in stable/juno.

 Josh Harlow is on vacation.

 I asked in #openstack-state-management channel who else can do a
 release, but haven't heard back from anyone yet.

 The library releases team manages all oslo releases; submit a
 proposed release to openstack/releases. I need to pop out shortly
 but will look in in my evening to see about getting the release
 tagged. If Dims or Doug are around now they can do it too,
 obviously :)


 That's the easy part.  The hard part is finding someone that can
 create the stable/juno branch for the taskflow project.  I've only
 ever seen dhellmann do that for oslo libraries.
 
 Checking in, anyone in oslo-core should be able to make this branch
 (and a release), but if you guys need me to I can do it later, just
 traveling for a little while longer...

I can do it too, just give me a SHA and I'll make it happen.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] I have a question about openstack cinder zonemanager driver.

2015-08-13 Thread Jay S. Bryant
Danny is correct.  You cannot have two different Zone Manager drivers 
configured for one volume process.


Jay

On 08/13/2015 11:00 AM, Daniel Wilson wrote:
I am fairly certain you cannot currently use two different FC switch 
zone drivers in one cinder.conf.  In this case it looks like you would 
need two cinder nodes, one for Brocade fabric and one for Cisco fabric.


Thanks,
Danny

On Thu, Aug 13, 2015 at 2:43 AM, Chenying (A) ying.c...@huawei.com wrote:


Hi, guys

 I am using a Brocade FC switch in my OpenStack environment. I
have a question about the OpenStack cinder zone manager driver.

I find that [fc-zone-manager] can only configure one zone
driver, but I want to use two FC switches from different vendors at
the same time.

One is a Brocade FC switch, the other one is a Cisco FC switch. Is
there a method or solution to configure two FC switch zone drivers in
one cinder.conf?

I want them both to support the Zone Manager.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Consistent functional test failures

2015-08-13 Thread Clark Boylan
On Thu, Aug 13, 2015, at 03:13 AM, Tom Cammann wrote:
 Hi Team,
 
 Wanted to let you know why we are having consistent functional test 
 failures in the gate.
 
 This is being caused by Nova returning No valid host to heat:
 
 2015-08-13 08:26:16.303 31543 INFO heat.engine.resource [-] CREATE: 
 Server kube_minion [12ab45ef-0177-4118-9ba0-3fffbc3c1d1a] Stack 
 testbay-y366b2atg6mm-kube_minions-cdlfyvhaximr-0-dufsjliqfoet 
 [b40f0c9f-cb54-4d75-86c3-8a9f347a27a6]
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource Traceback (most 
 recent call last):
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
 /opt/stack/new/heat/heat/engine/resource.py, line 625, in
 _action_recorder
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
 /opt/stack/new/heat/heat/engine/resource.py, line 696, in _do_action
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield 
 self.action_handler_task(action, args=handler_args)
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
 /opt/stack/new/heat/heat/engine/scheduler.py, line 320, in wrapper
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource step = 
 next(subtask)
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
 /opt/stack/new/heat/heat/engine/resource.py, line 670, in 
 action_handler_task
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource while not 
 check(handler_data):
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
 /opt/stack/new/heat/heat/engine/resources/openstack/nova/server.py, 
 line 759, in check_create_complete
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource return 
 self.client_plugin()._check_active(server_id)
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File 
 /opt/stack/new/heat/heat/engine/clients/os/nova.py, line 232, in 
 _check_active
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource 'code': 
 fault.get('code', _('Unknown'))
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource 
 ResourceInError: Went to status ERROR due to Message: No valid host was 
 found. There are not enough hosts available., Code: 500
 
 And this in turn is being caused by the compute instance running out of 
 disk space:
 
 2015-08-13 08:26:15.216 DEBUG nova.filters 
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Starting with 1 
 host(s) get_filtered_objects /opt/stack/new/nova/nova/filters.py:70
 2015-08-13 08:26:15.217 DEBUG nova.filters 
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter 
 RetryFilter returned 1 host(s) get_filtered_objects 
 /opt/stack/new/nova/nova/filters.py:84
 2015-08-13 08:26:15.217 DEBUG nova.filters 
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter 
 AvailabilityZoneFilter returned 1 host(s) get_filtered_objects 
 /opt/stack/new/nova/nova/filters.py:84
 2015-08-13 08:26:15.217 DEBUG nova.filters 
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter RamFilter 
 returned 1 host(s) get_filtered_objects 
 /opt/stack/new/nova/nova/filters.py:84
 2015-08-13 08:26:15.218 DEBUG nova.scheduler.filters.disk_filter 
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] 
 (devstack-trusty-rax-dfw-4299602, devstack-trusty-rax-dfw-4299602) 
 ram:5172 disk:17408 io_ops:0 instances:1 does not have 20480 MB usable 
 disk, it only has 17408.0 MB usable disk. host_passes 
 /opt/stack/new/nova/nova/scheduler/filters/disk_filter.py:60
 2015-08-13 08:26:15.218 INFO nova.filters 
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter DiskFilter 
 returned 0 hosts
 
 For now a recheck seems to work about 1 in 2, so we can still land
 patches.
 
 The fix for this could be to clean up our Magnum devstack install more 
 aggressively, which might be as simple as cleaning up the images we use, 
 or to get infra to provide our tests with a larger disk size. I will 
 probably test out a patch today which cleans up the images we use in 
 devstack to see if that helps.
 
It is not trivial to provide your tests with more disk as we are using
the flavors appropriate for our RAM and CPU needs and are constrained by
quotas in the clouds we use. Do you really need 20GB nested test
instances? The VMs these jobs run on have ~13GB images which is almost
half the size of the instances you are trying to boot there. I would
definitely look into trimming the disk requirements for the nested VMs
before anything else.

As for working ~50% of the time, hpcloud gives us more disk than
rackspace, which is likely why you see about half fail and half pass. The
runs that pass probably run on hpcloud VMs.
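
For reference, the arithmetic behind the DiskFilter lines quoted above boils
down to roughly this -- a simplification using the numbers from the failing
run, not the actual Nova code:

    free_disk_mb = 17408        # usable disk the host reports
    requested_disk_gb = 20      # root + ephemeral + swap for the nested VM flavor
    requested_disk_mb = requested_disk_gb * 1024   # 20480 MB

    host_passes = requested_disk_mb <= free_disk_mb
    # False -> "Filter DiskFilter returned 0 hosts" -> "No valid host was found."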

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Brocade CI

2015-08-13 Thread Nagendra Jaladanki
Ramy,

Thanks for providing the correct message format. We will update our comment
message accordingly.

Thanks,
Nagendra Rao


On Thu, Aug 13, 2015 at 4:43 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

 Hi Nagendra,



 Seems one of the issues is the format of the posted comments. The correct
 format is documented here [1]



 Notice the format is not correct:

 Incorrect: Brocade Openstack CI (non-voting) build SUCCESS logs at:
 http://144.49.208.28:8000/build_logs/2015-08-13_18-19-19/

 Correct: * test-name-no-spaces http://link.to/result : [SUCCESS|FAILURE]
 some comment about the test



 Ramy



 [1]
 http://docs.openstack.org/infra/system-config/third_party.html#posting-result-to-gerrit



 *From:* Nagendra Jaladanki [mailto:nagendra.jalada...@gmail.com]
 *Sent:* Wednesday, August 12, 2015 4:37 PM
 *To:* OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 *Cc:* brocade-openstack...@brocade.com
 *Subject:* Re: [openstack-dev] [cinder] Brocade CI



 Mike,

 Thanks for your feedback and suggestions. I had sent my response yesterday,
 but it looks like it didn't get posted on lists.openstack.org. Hence
 posting it here again.

  We reviewed your comments and the following issues were identified; some of
  them are fixed and some fixes are in progress:


 1) Not posting success or failure

   The Brocade CI is a non-voting CI. The CI is posting the comment for
  build success or failures. The report tool is not seeing these. We are
  working on correcting this.
  2) Not posting a result link to view logs.

     We could not find any cases where the CI failed to post the link to logs
  from the generated report.  If you have any specific cases where it failed
  to post the logs link, please share them with us. But we did see that the CI
  did not post a comment at all for some review patch sets. We are root-causing
  why the CI did not post the comment at all.
  3) Not consistently doing runs.

     There were planned down times and the CI did not post during those periods.
  We also observed that the CI was not posting failures in some cases where the
  CI failed due to non-openstack issues. We corrected this. Now the CI should be
  posting results for all patch sets, either success or failure.

 We are also doing the following:

  - Enhancing the message format to be in line with other CIs.

  - Closely monitoring the incoming Jenkins requests vs. outgoing builds and
  correcting any issues.



  Once again, thanks for your feedback and suggestions. We will continue to
  post updates to this list.


 Thanks  Regards,

 Nagendra Rao Jaladanki

 Manager, Software Engineering Manageability Brocade

 130 Holger Way, San Jose, CA 95134



 On Sun, Aug 9, 2015 at 5:34 PM, Mike Perez thin...@gmail.com wrote:

 People have asked me at the Cinder midcycle sprint to look at the Brocade
 CI
 to:

 1) Keep the zone manager driver in Liberty.
 2) Consider approving additional specs that we're submitted before the
deadline.

 Here are the current problems with the last 100 runs [1]:

 1) Not posting success or failure.
 2) Not posting a result link to view logs.
 3) Not consistently doing runs. If you compare with other CI's there are
 plenty
missing in a day.

 This CI does not follow the guidelines [2]. Please get help [3].

 [1] - http://paste.openstack.org/show/412316/
 [2] -
 http://docs.openstack.org/infra/system-config/third_party.html#requirements
 [3] -
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Questions

 --
 Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]password for registry v2

2015-08-13 Thread Hongbin Lu
Hi Wanghua,

For the question about how to pass user password to bay nodes, there are 
several options here:

1.   Directly inject the password into bay nodes via cloud-init. This might 
be the simplest solution. I am not sure if it is OK from a security standpoint.

2.   Inject a scoped Keystone trust to bay nodes and use it to fetch user 
password from Barbican (suggested by Adrian).

3.   Leverage the solution proposed by Kevin Fox [1]. This might be a 
long-term solution.
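
Regarding option 2, here is a rough sketch of what creating such a scoped
trust could look like with python-keystoneclient -- all IDs, roles and
endpoints below are placeholders, and this is only an illustration, not a
tested Magnum change:

    from keystoneclient.v3 import client as ks_client

    keystone = ks_client.Client(token='ADMIN_TOKEN',
                                endpoint='http://controller:35357/v3')

    trust = keystone.trusts.create(
        trustor_user='BAY_OWNER_USER_ID',       # user delegating access
        trustee_user='MAGNUM_SERVICE_USER_ID',  # user acting on their behalf
        project='PROJECT_ID',                   # scope of the delegation
        role_names=['Member'],                  # only the roles the registry needs
        impersonation=True)

    # trust.id could then be injected into the bay nodes instead of a password.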

For the security concerns about storing a credential in a config file, I need 
clarification. What is the config file? Is it a docker registry v2 config file? 
I guess the credential stored there will be used to talk to swift. Is that 
correct? In general, it is insecure to store user credentials inside a VM, 
because anyone can take a snapshot of the VM and boot another VM from the 
snapshot. Maybe storing a scoped credential in the config file could mitigate 
the security risk. Not sure if there is a better solution.

[1] https://review.openstack.org/#/c/186617/

Best regards,
Hongbin

From: 王华 [mailto:wanghua.hum...@gmail.com]
Sent: August-13-15 4:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum]password for registry v2

Hi all,

In order to add registry v2 to bay nodes [1], authentication information is 
needed for the registry to upload and download files from swift. The swift 
storage-driver in the registry currently needs the parameters described in [2]. 
The user password is needed. How can we get the password?

1. Let the user pass the password in baymodel-create.
2. Use the user token to get the password from keystone.

Is it suitable to store the user password in the db?

It may be insecure to store the password in the db and expose it to the user in 
a config file, even if the password is encrypted. Heat used to store the user 
password in the db, and has now changed to keystone trusts [3]. But if we use a 
keystone trust, the swift storage-driver does not support it. If we use a trust, 
we expose the magnum user's credential in a config file, which is also insecure.

Is there a secure way to implement this bp?

[1] https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master
[2] 
https://github.com/docker/distribution/blob/master/docs/storage-drivers/swift.md
[3] https://wiki.openstack.org/wiki/Keystone/Trusts

Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] State of Puppet OpenStack Integration

2015-08-13 Thread Emilien Macchi
This is a status of our work to bring integration jobs into our CI:

* we successfully deploy a stack with Keystone WSGI and run tempest
tests (v2 and v3 API) on both centos7 & trusty (kilo)
* blocker: we need to bump puppetlabs-mysql:
https://review.openstack.org/209209
* blocker: we need to decide where to put bash scripts:
https://review.openstack.org/210380
* need review on deployment ( https://review.openstack.org/207070 ) and
testing ( https://review.openstack.org/207078 )

Please be kind with the last point: we are building a proof of concept with a
basic design. Of course we will improve the way we run Tempest (the Tempest
team is actually helping us) so we can be more flexible and cover different
use cases.
The scenario is quite basic right now, and this is what we want. We will
improve it as soon as we have feedback after some tests in our CI.

Thank you for your help,
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Possible issues cryptography 1.0 and os-client-config 1.6.2

2015-08-13 Thread Robert Collins
On 14 August 2015 at 11:05, Sean M. Collins s...@coreitpro.com wrote:
 On August 13, 2015 3:45:03 AM EDT, Robert Collins
 robe...@robertcollins.net wrote:

 tl;dr - developers running devstack probably want to be using
 USE_CONSTRAINTS=True, otherwise there may be a few rough hours today.
 [http://lists.openstack.org/pipermail/openstack-dev/2015-July/068569.html]


 Apparently some third party CI systems are seeing an issue with
 cryptography 1.0 - https://review.openstack.org/#/c/212349/ - but I
 haven't reproduced it in local experiments. We haven't seen it in the
 gate yet (according to logstash.o.o). Thats likely due to it only
 affecting devstack in that way, and constraints insulating us from it.

 Hopefully the cryptography thing, whatever it is, won't affect unit
 tests which are not yet using constraints (but thats being worked on
 as fast as we can!)


 There was a separate issue w/ os-client-config
 that blew up on 1.6.2,
 but thats been reproduced and dealt with - though the commit hasn't
 gotten through the gate yet, so its possible we'll be dealing with a
 dual-defect problem.

 That said, again, constraints correctly insulated devstack from the
 1.6.2 release - we detected it when the update proposal failed, rather
 than in the gate queue, so \o/.

 AFAICT the os-client-config thing won't affect any unit tests.

 -Rob


 Do we want to make it default to True for users of DevStack? I think it
 may be worth considering: I'm not familiar with this issue with crypto, and
 having something that protects devs who are trying to work on stuff using
 DevStack sounds good. Less cryptic day-to-day breakage for new
 developers just getting started with DevStack is a long-term goal I have
 (where possible).

 Putting it another way, do you believe that *not* using constraints is the
 exception, not the norm?

I think it would make sense to promote this to default now that
we've had it in place for a month without the world ending, and now
that we've got the updates and syncing with new releases all moving
smoothly.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Liberty support in Puppet modules: blocked by packaging

2015-08-13 Thread Emilien Macchi
All packaging issues have been addressed. Our only Puppet blocker is in
puppet-manila: https://review.openstack.org/212680

Once all CI jobs (especially beaker) are positive on
https://review.openstack.org/#/q/status:open+branch:master+topic:puppet/acceptance-liberty,n,z
we can think about moving our CI forward to Liberty. Please take a look at
all these patches if we want to make progress.

Kudos to ubuntu/fedora packagers for their help!

On Tue, Aug 4, 2015 at 11:49 PM, Emilien Macchi emilien.mac...@gmail.com
wrote:

 I made progress today: I tested Liberty with all of our Puppet modules that
 have acceptance tests, using RDO trunk (centos) and Liberty Staging
 (trusty).

 This is the status of the patches:
 https://review.openstack.org/#/q/status:open+branch:master+topic:puppet/acceptance-liberty,n,z

 And the blockers: http://paste.openstack.org/show/vQeYnl0N9ukliKu5Y3DJ/

 I'm working closely with Packaging maintainers, and I don't see huge
  blockers. We can probably expect to have something usable in one week or so.

 On Tue, Jul 28, 2015 at 4:07 PM, Emilien Macchi emilien.mac...@gmail.com
 wrote:

  Our current modules officially support Kilo.
  We would like to move our master branch forward to Liberty support, but
  since we rely on RDO/UCO packages [1] and RDO is AFAIK not providing
  'stable enough' liberty packages, we can't merge any feature that is
  'liberty only', since we can't actually test it.

  That's the current situation and we are working to get this solved as
  soon as possible so we can go ahead.
  I would ask our contributors to understand this situation and mark their
  patches as work in progress when they break the Kilo configuration.

 Thanks,

 [1]

 http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/master-policy.html#proposed-change
 ---
 Emilien Macchi




 --
 Emilien Macchi




-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Compass] Call for contributors

2015-08-13 Thread Weidong Shao
Jesse,

Many thanks for your message and the support of this planned integration
work. I added Xicheng and Justin, who are currently looking into getting
Compass deployment to work with OSAD.

Sorry I missed your meeting today and we will definitely join your meeting
next week. In the meantime, we will reach out to you if we run into any
issues.

Thanks,
Weidong

On Wed, Aug 12, 2015 at 11:20 PM Jesse Pretorius jesse.pretor...@gmail.com
wrote:

 On 12 August 2015 at 17:23, Weidong Shao weidongs...@gmail.com wrote:


 Compass is not new to OpenStack community. We started it as an OpenStack
 deployment tool at the HongKong summit. We then showcased it at the Paris
 summit.

 However, the project has gone through some changes recently. We'd like to
 re-introduce Compass and welcome new developers to expand our efforts,
 share in its design, and advance its usefulness to the OpenStack community.

 We intend to follow the 4 openness guidelines and enter the Big Tent.
 We have had some feedback from TC reviewers and others and realize we have
 some work to do to get there. More developers interested in working on the
 project will get us there easier.

 Besides the openness Os, there is critical developer work we need to get
 to one of the OpenStack Os.  For example, we have forked Chef cookbooks,
 and Ansible written from scratch for OpenStack deployment. We need to merge
 the Compass Ansible playbooks back to openstack upstream repo
 (os-ansible-deployment).

 We also need to reach out to other related projects, such as Ironic, to
 make sure that where our efforts overlap, we provided added value, not
 different ways of doing the same thing.

 Lot of work we think will add to the OpenStack community.


- The project wiki page is at https://wiki.openstack.org/wiki/Compass
- The launchpad is: https://launchpad.net/compass
- The weekly IRC meeting is on openstack-meeting4 0100 Thursdays UTC
(or Wed 6pm PDT)
- Code repo is under stackforge
https://github.com/stackforge/compass-core
https://github.com/stackforge/compass-web
https://github.com/stackforge/compass-adapters

 Hi Weidong,

 This looks like an excellent project and we (the openstack-ansible
 project) love to assist you with the integration of Compass with
 openstack-ansible (aka os-ansible-deployment).

 I'd like to discuss with your team how we can work together to facilitate
 Compass' consumption of the playbooks/roles we produce in a suitable way
 and will try to attend the next meeting as I seem to have missed this
 week's meeting). We'd like to understand the project's needs so that we can
 work towards defined goals to accommodate them, while also maintaining our
 stability for other downstream consumers.

 We also invite you to attend our next meeting on Thu 16:00 UTC in
 #openstack-meeting-4 - details are here for reference:
 https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Community_Meeting

 Looking forward to working with you!

 Best regards,

 Jesse

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Possible issues cryptography 1.0 and os-client-config 1.6.2

2015-08-13 Thread Sean M. Collins
On August 13, 2015 3:45:03 AM EDT, Robert Collins robe...@robertcollins.net 
wrote:
tl;dr - developers running devstack probably want to be using
USE_CONSTRAINTS=True, otherwise there may be a few rough hours today.
[http://lists.openstack.org/pipermail/openstack-dev/2015-July/068569.html]


Apparently some third party CI systems are seeing an issue with
cryptography 1.0 - https://review.openstack.org/#/c/212349/ - but I
haven't reproduced it in local experiments. We haven't seen it in the
gate yet (according to logstash.o.o). Thats likely due to it only
affecting devstack in that way, and constraints insulating us from it.

Hopefully the cryptography thing, whatever it is, won't affect unit
tests which are not yet using constraints (but thats being worked on
as fast as we can!)


There was a separate issue w/ os-client-config that blew up on 1.6.2,
but thats been reproduced and dealt with - though the commit hasn't
gotten through the gate yet, so its possible we'll be dealing with a
dual-defect problem.

That said, again, constraints correctly insulated devstack from the
1.6.2 release - we detected it when the update proposal failed, rather
than in the gate queue, so \o/.

AFAICT the os-client-config thing won't affect any unit tests.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Do we want to make it default to True for users of DevStack? I think it may 
be worth considering: I'm not familiar with this issue with crypto, and 
having something that protects devs who are trying to work on stuff using 
DevStack sounds good. Less cryptic day-to-day breakage for new developers 
just getting started with DevStack is a long-term goal I have (where 
possible).

Putting it another way, do you believe that *not* using constraints is the 
exception, not the norm?


-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Brocade CI

2015-08-13 Thread Asselin, Ramy
Hi Nagendra,

Seems one of the issues is the format of the posted comments. The correct 
format is documented here [1]

Notice the format is not correct:
Incorrect: Brocade Openstack CI (non-voting) build SUCCESS logs at: 
http://144.49.208.28:8000/build_logs/2015-08-13_18-19-19/
Correct: * test-name-no-spaces http://link.to/result : [SUCCESS|FAILURE] some 
comment about the test

Ramy

[1] 
http://docs.openstack.org/infra/system-config/third_party.html#posting-result-to-gerrit

From: Nagendra Jaladanki [mailto:nagendra.jalada...@gmail.com]
Sent: Wednesday, August 12, 2015 4:37 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: brocade-openstack...@brocade.com
Subject: Re: [openstack-dev] [cinder] Brocade CI

Mike,

Thanks for your feedback and suggestions. I had sent my response yesterday, but 
it looks like it didn't get posted on 
lists.openstack.org. Hence posting it here again.

We reviewed your comments and the following issues were identified; some of them 
are fixed and some fixes are in progress:

1) Not posting success or failure
 The Brocade CI is a non-voting CI. The CI is posting the comment for build 
success or failures. The report tool is not seeing these. We are working on 
correcting this.
2) Not posting a result link to view logs.
   We could not find any cases where the CI failed to post the link to logs from 
the generated report.  If you have any specific cases where it failed to post 
the logs link, please share them with us. But we did see that the CI did not post 
a comment at all for some review patch sets. We are root-causing why the CI did 
not post the comment at all.
3) Not consistently doing runs.
   There were planned down times and the CI did not post during those periods. We 
also observed that the CI was not posting failures in some cases where the CI 
failed due to non-openstack issues. We corrected this. Now the CI should be 
posting results for all patch sets, either success or failure.
We are also doing the following:
- Enhancing the message format to be in line with other CIs.
- Closely monitoring the incoming Jenkins requests vs. outgoing builds and 
correcting any issues.

Once again, thanks for your feedback and suggestions. We will continue to post 
updates to this list.

Thanks  Regards,

Nagendra Rao Jaladanki

Manager, Software Engineering Manageability Brocade

130 Holger Way, San Jose, CA 95134

On Sun, Aug 9, 2015 at 5:34 PM, Mike Perez 
thin...@gmail.com wrote:
People have asked me at the Cinder midcycle sprint to look at the Brocade CI
to:

1) Keep the zone manager driver in Liberty.
2) Consider approving additional specs that were submitted before the
   deadline.

Here are the current problems with the last 100 runs [1]:

1) Not posting success or failure.
2) Not posting a result link to view logs.
3) Not consistently doing runs. If you compare with other CI's there are plenty
   missing in a day.

This CI does not follow the guidelines [2]. Please get help [3].

[1] - http://paste.openstack.org/show/412316/
[2] - 
http://docs.openstack.org/infra/system-config/third_party.html#requirements
[3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Questions

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-13 Thread Brian Rosmaita
On 8/13/15, 11:32 AM, Mike Perez thin...@gmail.com wrote:
* Role based properties [5] (who is asking for this, and why is Glance
  enforcing roles?)
I can answer this one.

Property protections have been available in Glance since Havana [6].
The feature was requested by deployers.
In general, Glance enforces roles because when users make requests it
needs a basis on which to decide whether they should be satisfied;
property protections are a specific application of that principle.

[5] - https://review.openstack.org/#/c/211201/1
[6] 
https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection

cheers,
brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor for core

2015-08-13 Thread Tom Barron
On 8/13/15 3:13 PM, Mike Perez wrote:
 It gives me great pleasure to nominate Gorka Eguileor for Cinder core.
 
 Gorka's contributions to Cinder core have been much appreciated:
 
 https://review.openstack.org/#/q/owner:%22Gorka+Eguileor%22+project:openstack/cinder,p,0035b6410002dd11
 
 60/90 day review stats:
 
 http://russellbryant.net/openstack-stats/cinder-reviewers-60.txt
 http://russellbryant.net/openstack-stats/cinder-reviewers-90.txt
 
 Cinder core, please reply with a +1 for approval. This will be left
 open until August 19th. Assuming there are no objections, this will go
 forward after voting is closed.
 

Not a cinder core, but I've found Gorka's reviews helpful and instructive.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][kuryr][magnum] Design Specification for Kuryr

2015-08-13 Thread Adrian Otto
Kyle,

Can we please arrange for an official design specification for Kuryr so members 
of the Magnum team can relay input to be sure that our mutual interests in this 
work are addressed?

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][kuryr][magnum] Magnum/Kuryr Integration

2015-08-13 Thread Daneyon Hansen (danehans)

The Magnum Networking Subteam just concluded our weekly meeting. Feel free to 
review the logs[1], as Kuryr integration was an agenda topic that drew 
considerable discussion. An etherpad[2] has been created to foster 
collaboration on the topic. Kuryr integration is scheduled as a topic for next 
week’s agenda. It would be a big help if the Kuryr team can review the etherpad 
and have representation during next week's meeting[3]. I look forward to our 
continued collaboration.

[1] 
http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-08-13-18.00.log.txt
[2] https://etherpad.openstack.org/p/magnum-kuryr
[3] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Proposing Gorka Eguileor for core

2015-08-13 Thread Mike Perez
It gives me great pleasure to nominate Gorka Eguileor for Cinder core.

Gorka's contributions to Cinder core have been much appreciated:

https://review.openstack.org/#/q/owner:%22Gorka+Eguileor%22+project:openstack/cinder,p,0035b6410002dd11

60/90 day review stats:

http://russellbryant.net/openstack-stats/cinder-reviewers-60.txt
http://russellbryant.net/openstack-stats/cinder-reviewers-90.txt

Cinder core, please reply with a +1 for approval. This will be left
open until August 19th. Assuming there are no objections, this will go
forward after voting is closed.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kuryr][magnum] Design Specification for Kuryr

2015-08-13 Thread Kyle Mestery
On Thu, Aug 13, 2015 at 1:56 PM, Adrian Otto adrian.o...@rackspace.com
wrote:

 Kyle,

 Can we please arrange for an official design specification for Kuryr so
 members of the Magnum team can relay input to be sure that our mutual
 interests in this work are addressed?


I agree 100%. See Gal's email here [1] for a status update. At this point
the team is collecting ideas in an etherpad. Gal, can I request that you
formulate those into a spec of some sort? It makes sense to file that in
the Neutron spec repo at this point. We can all then review it there.

Thanks!
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071940.html


 Thanks,

 Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] federation

2015-08-13 Thread Navid Pustchi
Hi
I am setting up three keystones to be federated, i.e. getting a federated token
with a federated token.
I have three devstack kilo instances:
kilo1 (IdP) - kilo2 (SP / IdP) - kilo3 (SP)
1. Get a federated scoped token for a project in kilo2.
2. Use this federated token to get a federated scoped token for kilo3.
The issue is that when the SP (kilo2) is set up to be an IdP as well as a
service provider (for kilo3), I get an HTTP 500 internal server error.
The responses up to the error are at the following
link: http://paste.openstack.org/show/412951/
I realized that if I remove the service provider (from kilo2) then it works fine;
the service provider is in line 18 of the results.

Thank you   

 Navid Pustchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-13 Thread Kevin Benton
+1

On Wed, Aug 12, 2015 at 10:43 PM, Akihiro Motoki amot...@gmail.com wrote:

 +1 for both.

 2015-08-12 22:45 GMT+09:00 Kyle Mestery mest...@mestery.com:

 It gives me great pleasure to propose Russell Bryant and Brandon Logan as
 core reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have
 both been incredible contributors to Neutron for a while now. Their
 expertise has been particularly helpful in the area they are being proposed
 in. Their review stats [1] place them both comfortably in the range of
 existing Neutron core reviewers. I expect them to continue working with all
 community members to drive Neutron forward for the rest of Liberty and into
 Mitaka.

 Existing DB/API/RPC core reviewers (and other Neutron core reviewers),
 please vote +1/-1 for the addition of Russell and Brandon.

 Thanks!
 Kyle

 [1] http://stackalytics.com/report/contribution/neutron-group/90

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-13 Thread Flavio Percoco

On 13/08/15 08:32 -0700, Mike Perez wrote:

On 14:48 Aug 13, Flavio Percoco wrote:


This is one of the reasons why it was asked for this email to be sent
rather than making decisions based on assumptions (you're free to read
- or not - the meeting logs).

Saying Glance is just pissing off the community helps spread a
rumor that has been floating around, which causes more harm than good,
and I'm not willing to accept that.

If people have problems with where Glance is headed then please, speak 
up. I've done that for Glance, Zaqar, Trove and several other
projects. I expect the same courtesy from other folks in this
community, especially folks in leadership positions.


I got zero responses on the mailing list raising a problem with Glance v2 [1].

I got zero responses on cross project meeting raising a problem with Glance v2
[2].

I'm very happy with my choice of words, because I think this hand slap on
Glance is the first time I got acknowledgement of my frustration with Glance.


This is the first time I hear about your frustration, TBH. But I'm
happy you brought it up.


I should not have to attend a Glance meeting to get someone to fix Glance v2
integration issues in Cinder.


Fully agreed and I'd like to add that reaching out on the mailing list
and in the cross-project meeting should be more than enough to get
attention on cross-project issues.

As a plus, popping up in the channel in case neither of the above
worked is also good. Like me, some folks could have missed the thread
and find the time of the cross-project meeting very inconvenient.



Glance is trying to increase v2 integration with Nova besides show [3], but
I would recommend Nova to not accept further v2 integration until Glance has
figured out how to handle issues in projects that already have v2 integration.

To start, Glance should assign a cross project liaison [4] to actually respond
to integration issues in Cinder.


Fully agreed, again.


Having focuses on the following is not helping:

* Artifacts (I honestly don't know what this is and failed to find an
 explanation that's *not* some API doc).


Here I disagree. As of now, artifacts is what, at least for me, keeps
the hope that Glance will go back to being what it used to be. The
difference is that artifacts will do the registry work in a more
generic way, rather than focusing just on images.

Some people might disagree with the above being necessary at all, I
just see it as a way to clean up part of the mess you mentioned and as
a way to cover other areas of interest for the community.


* Tagging
* Role based properties [5] (who is asking for this, and why is Glance
 enforcing roles?)


Erm, no idea what these two are, really. I'd guess the latter is a
follow-up on the metadef work but I don't see why that's needed.


This is a mess, and a complete lack of focus on what Glance was once good
at: being a registry for images.


I wish I could tell you that you're wrong but I can't. You're very
much right and I very much agree with you. I don't believe, however,
that Glance should limit itself to being an image registry. There are
other features around images that are of great use for OpenStack
deployments. Image conversion, for instance.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


pgpaC9Tqczv2r.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Specs tree re-organization

2015-08-13 Thread Jim Rollenhagen
Hi folks,

To align with our independent release model, we also re-organized our
specs tree. New specs should be proposed to the approved directory.
The proposal process is documented further in the README[0].

I've gone through existing specs proposals and placed a -1 on specs
proposing to the liberty directory. Please update them :)

Thanks!

// jim

[0] http://git.openstack.org/cgit/openstack/ironic-specs/tree/README.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] [Ceilometer] profiler sample resource id

2015-08-13 Thread Boris Pavlovic
Pradeep,


Actually this topic is more about osprofiler & ceilometer.

Overall it doesn't require this prefix.
However, it is used in the osprofiler lib.
https://github.com/stackforge/osprofiler/blob/master/osprofiler/parsers/ceilometer.py#L129
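
A minimal sketch of that coupling, with simplified function names standing in
for the two linked files (an illustration only, not the actual code):

---
# The Ceilometer notification handler builds the sample resource_id with a
# "profiler-" prefix (simplified here for illustration).
def build_resource_id(payload):
    return "profiler-%s" % payload["base_id"]

# The osprofiler ceilometer parser expects that same prefix and strips it when
# reassembling a trace, so dropping the prefix on the Ceilometer side would
# need a matching change on this side as well.
def extract_base_id(resource_id):
    prefix = "profiler-"
    if resource_id.startswith(prefix):
        return resource_id[len(prefix):]
    return resource_id
---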


Best regards,
Boris Pavlovic

On Thu, Aug 13, 2015 at 7:16 AM, Pradeep Kilambi pkila...@redhat.com
wrote:



 On Thu, Aug 13, 2015 at 8:50 AM, Roman Vasilets rvasil...@mirantis.com
 wrote:

 Hi,
Could you provide the link to this code?



 Here it is:


 https://github.com/openstack/ceilometer/blob/master/ceilometer/profiler/notifications.py#L76





 On Wed, Aug 12, 2015 at 9:22 PM, Pradeep Kilambi pkila...@redhat.com
 wrote:

 We're in the process of converting existing meters to use a more
 declarative approach where we add the meter definition as part of a yaml.
 As part of this transition there are few notification handlers where the id
 is not consistent. For example, in profiler notification Handler the
 resource_id is set to profiler-%s % message[payload][base_id] . Is
 there a reason we have the prefix? Can we ignore this and directly set
 to message[payload][base_id] ? Seems like there is no real need for the
 prefix here unless i'm missing something. Can we go ahead and drop this?

 If we don't hear anything i'll assume there is no objection to dropping
 this prefix.


 Thanks,

 --
 --
 Pradeep Kilambi; irc: prad
 OpenStack Engineering


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 --
 Pradeep Kilambi; irc: prad
 OpenStack Engineering

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] I have a question about openstack cinder zonemanager driver

2015-08-13 Thread Chenying (A)
   Hi,

Jay S. Bryant, Daniel Wilson: thank you for your reply. Now I know that in this
case I need two cinder nodes, one for the Brocade fabric and one for the Cisco fabric.

Do you think it is necessary for the cinder zone manager to support multiple drivers
from different vendors in one cinder.conf?



Thanks,

ChenYing


From: Jay S. Bryant [mailto:jsbry...@electronicjungle.net]
Sent: August 14, 2015 0:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] I have a question about openstack cinder
zonemanager driver.

Danny is correct.  You cannot have two different Zone Manager drivers 
configured for one volume process.

Jay

On 08/13/2015 11:00 AM, Daniel Wilson wrote:

I am fairly certain you cannot currently use two different FC switch zone 
drivers in one cinder.conf.  In this case it looks like you would need two 
cinder nodes, one for Brocade fabric and one for Cisco fabric.

Thanks,
Danny

On Thu, Aug 13, 2015 at 2:43 AM, Chenying (A) 
ying.c...@huawei.commailto:ying.c...@huawei.com wrote:
Hi, guys

     I am using a Brocade FC switch in my OpenStack environment. I have a
question about the OpenStack cinder zone manager driver.

I find that [fc-zone-manager] can only configure one zone driver, but I want to
use two FC switches from different vendors at the same time.

One is a Brocade FC switch, the other is a Cisco FC switch. Is there a method
or solution to configure two FC switch zone drivers in one cinder.conf?

I want them both to support the Zone Manager.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] neutron-lbaas code structure problem

2015-08-13 Thread Gareth
Dear neutron guys.

[0]
https://github.com/openstack/neutron-lbaas/tree/master/neutron_lbaas/drivers
[1]
https://github.com/openstack/neutron-lbaas/tree/master/neutron_lbaas/services/loadbalancer/drivers

the code under these two paths is essentially the same (implementing the same
things for v1 and v2), so why are two different code paths used here?

-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove]Implement the API tocreatemasterinstance and slave instances with one request

2015-08-13 Thread 陈迪豪
Thanks Doug.


We would like to know the community's thoughts as well. We will
file a BP once we are using this API in our production environment. This may be
needed by most trove users :)
 
-- Original --
From: Doug Shelley d...@tesora.com
Date: Thu, Aug 13, 2015 08:09 PM
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [trove] Implement the API to create master instance
and slave instances with one request

 
   Tobe,
  
 
  The BP for the feature that added replica_count is here - 
https://github.com/openstack/trove-specs/blob/master/specs/kilo/replication-v2.rst
  
 
 Your suggestion for changing the semantics of the API is interesting – I would be
interested to know what others in the community think about this as well. 
Maybe you could file a BP and suggest this improvement?
 
 
 
 Regards,
 Doug
  
 
   From: 陈迪豪 chendi...@unitedstack.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Thursday, August 13, 2015 at 4:12 AM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [trove] Implement the API to create master instance
 and slave instances with one request
 
 
 
We have read the code of replica_count and it is like what I thought.


We have a suggestion to extend this feature. When users set slave_of_id and
replica_count at the same time, we just create replica instances. If they use
replica_count without slave_of_id, we should create a master
instance for them and some replica instances of it.


For example, trove create $name --replica-of $id --replica_count=2 would
create 2 replica instances, and trove create $name --replica_count=2 would
create 1 master instance and 2 replica instances.
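
To make the suggestion concrete, here is a rough sketch of the two call patterns
using python-troveclient (assuming the Kilo client's replica_of / replica_count
arguments; the flavor and volume values are placeholders, and the second pattern
is only the proposed behaviour, not something the API supports today):

---
from troveclient.v1 import client

trove = client.Client(username="demo", password="secret",
                      project_id="demo",
                      auth_url="http://controller:5000/v2.0")

# Kilo behaviour: create the master first, then fan replicas out from it.
master = trove.instances.create("mysql-master", flavor_id="2",
                                volume={"size": 5})
replicas = trove.instances.create("mysql-replica", flavor_id="2",
                                  volume={"size": 5},
                                  replica_of=master.id, replica_count=2)

# Proposed extension: replica_count without replica_of would create the master
# plus two replicas in a single request (not implemented today).
---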
  
 
 What do you think Doug?
 
 
 Regards,
 tobe from UnitedStack
 
 
 -- Original --
 From: 陈迪豪 chendi...@unitedstack.com
 Date: Thu, Aug 13, 2015 12:25 PM
 To: openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [trove] Implement the API to create master instance
 and slave instances with one request
 
  
 Thanks Doug.
   
 It's really helpful and we need this feature as well. Can you point us to the bp
or patch for this?


 I think we will add a --replica-count parameter to the trove create request,
so trove-api will create a trove instance (asynchronously creating the nova instance)
and then create some replica trove instances (asynchronously creating nova instances).
This is really useful for web front-end developers who want to create master and
replica instances at the same time (they don't want to send multiple requests themselves).
 
 
 Regards,
 tobe from UnitedStack 
 
 
 -- Original --
 From: Doug Shelley d...@tesora.com
 Date: Wed, Aug 12, 2015 10:21 PM
 To: openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [trove] Implement the API to create
 master instance and slave instances with one request
 
  
As of Kilo, you can add a --replica-count parameter to trove create
--replica-of to have it spin up multiple mysql slaves simultaneously. This same
construct is in the python/REST API as well. I realize that you still need to
create a master first, but thought I would point this out as it might be
helpful to you.
 
 
 
 
 Regards,
 Doug
 
 
 
 
   From: 陈迪豪 chendi...@unitedstack.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Tuesday, August 11, 2015 at 11:45 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [trove]Implement the API to create master instance 
and slave instances with one request
 
 
 
Now we can create a mysql master instance and slave instances one by one.


It would be much better to allow users to create one master instance and
multiple slave instances with one request.


Any suggestions about this, the API design or the implementation?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kuryr][magnum] Magnum/Kuryr Integration

2015-08-13 Thread Gal Sagie
Thanks Daneyon for raising the integration in your IRC meeting and for
starting the etherpad (I hope to attend your next online meeting).
As requested, I will start writing a detailed spec about Kuryr where
everyone can add and comment.
We already have a basic design document [1] describing the mapping model to
the Neutron API (thanks to Taku).

We haven't made any hard design decisions; we have our goals and
roadmap, but we are in a learning period
where we want to understand the use cases and missing parts in
Neutron and in container networking so we can address them.
I believe (and I am sure the rest of the Kuryr team agrees) that Magnum use
cases and integration are a top priority for us, and we want
to learn and work together with you guys.

At this stage we are focusing on mapping Kuryr to Neutron for host
container networking (and all the configuration
options needed for that) and building containerised Neutron plugins (from
existing plugins)
with Kuryr adding the missing parts (VIF-binding the container to the
networking infrastructure).

It is obvious that this same solution can be applied to nested VMs, but as
you mentioned in the IRC talk
this has its overhead, and we want to provide an agentless solution which fits
Magnum use cases.

It's important to note that Neutron doesn't necessarily mean the OVS L2
agent; this is just one implementation of Neutron,
and we already have Neutron implementations which support the use cases of
containers in nested VMs
(and I am sure more will come in the future, like Midonet).

For example, let's look at OVN (which has a Neutron plugin). I have CC'ed
shettyg from VMware, who works on that, for corrections/additions.

We can configure container ports in OVN that are in a nested VM (with a
parent port) and attach these ports
to any logical Neutron network we desire (which can be different from the
network of the VM's port). OVN in the host will make sure to apply
all the needed logic in the host, and in the VM Docker only needs to attach
the container port to the OVS bridge with the correct VLAN
(which is dynamically allocated by the Neutron/OVN plugin).
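
As a rough illustration of that model (assuming the OVN Neutron plugin's
binding:profile convention of 'parent_name' and 'tag' for container ports; the
credentials and UUIDs below are placeholders, not a definitive recipe):

---
from neutronclient.v2_0 import client

neutron = client.Client(username="demo", password="secret",
                        tenant_name="demo",
                        auth_url="http://controller:5000/v2.0")

# A port for the container, placed on whatever logical network we want, and
# tied to the VM's own Neutron port plus the VLAN the VM will tag the
# container traffic with.
container_port = neutron.create_port({
    "port": {
        "network_id": "NEUTRON_NETWORK_UUID",
        "name": "container-port",
        "binding:profile": {
            "parent_name": "VM_PARENT_PORT_UUID",
            "tag": 42,
        },
    }
})

# Inside the VM, Docker then only has to attach the container interface to the
# OVS bridge with VLAN 42; OVN applies the logical-network policy on the host.
---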

I tried to keep this description minimal here, but you can read more about
it in my blog [2], and I also intend to describe
this in more detail in the spec.
We want to formalise this part with Kuryr to fit other solutions as well
(and future solutions) and address the missing parts in Neutron, and I believe with
something like that Magnum
can leverage nested containers without the overhead of an agent in the VM
(Magnum or Kuryr at this point will only need to provide the binding and
VLAN attachment in the VM).

Hope that makes sense; let's continue iterating in IRC/over email and in
the Kuryr spec which I will provide next week.

Feel free to share any thoughts/comments you have on this.

[1] https://github.com/openstack/kuryr/blob/master/doc/source/design.rst
[2] http://galsagie.github.io/sdn/openstack/ovs/2015/04/26/ovn-containers/


On Thu, Aug 13, 2015 at 10:16 PM, Daneyon Hansen (danehans) 
daneh...@cisco.com wrote:


 The Magnum Networking Subteam just concluded our weekly meeting. Feel free
 to review the logs[1], as Kuryr integration was an agenda topic that drew
 considerable discussion. An etherpad[2] has been created to foster
 collaboration on the topic. Kuryr integration is scheduled as a topic for
 next week’s agenda. It would be a big help if the Kuryr team can review the
 etherpad and have representation during next week's meeting[3]. I look
 forward to our continued collaboration.

 [1]
 http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-08-13-18.00.log.txt
 [2] https://etherpad.openstack.org/p/magnum-kuryr
 [3]
 https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

 Regards,
 Daneyon Hansen

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]Tricircle Liberty Midcycle Minisprint on Aug 17th 2015

2015-08-13 Thread Zhipeng Huang
Hi Team,

As discussed in the last meeting, we will hold a design sprint meeting
next Monday starting at 1200 UTC to discuss design problems
and ideas exclusively. The meeting will happen in #openstack-tricircle so that we can
discuss as long as we are awake, without time limits :P

The meeting info can be found at
https://wiki.openstack.org/wiki/Meetings/Tricircle; please reply with any topic
you want to add to the agenda.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]password for registry v2

2015-08-13 Thread 王华
Hi hongbin,
I have comments in line.

Thank you.

Regards,
Wanghua

On Fri, Aug 14, 2015 at 6:20 AM, Hongbin Lu hongbin...@huawei.com wrote:

 Hi Wanghua,



 For the question about how to pass user password to bay nodes, there are
 several options here:

 1.   Directly inject the password to bay nodes via cloud-init. This
 might be the simplest solution. I am not sure if it is OK in security
 aspect.

 2.   Inject a scoped Keystone trust to bay nodes and use it to fetch
 user password from Barbican (suggested by Adrian).

If we use a trust, who should we let the user trust? If we let the user trust
magnum, then magnum's credential will appear in the VM, which I think is
insecure.

 3.   Leverage the solution proposed by Kevin Fox [1]. This might be a
 long-term solution.



 For the security concerns about storing credential in a config file, I
 need clarification. What is the config file? Is it a dokcer registry v2
 config file? I guess the credential stored there will be used to talk to
 swift. Is that correct? In general, it is

The credential stored in docker registry v2 config file is used to talk to
swift.


 insecure to store user credential inside a VM, because anyone can take a
 snapshot of the VM and boot another VM from the snapshot. Maybe storing a
 scoped credential in the config file could mitigate the security risk. Not
 sure if there is a better solution.



 [1] https://review.openstack.org/#/c/186617/



 Best regards,

 Hongbin



 *From:* 王华 [mailto:wanghua.hum...@gmail.com]
 *Sent:* August-13-15 4:06 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [magnum]password for registry v2



 Hi all,



 In order to add registry v2 to bay nodes[1], authentication information is
 needed for the registry to upload and download files from swift. The swift
 storage-driver in registry now needs the parameters as described in [2].
 User password is needed. How can we get the password?



 1. Let user pass password in baymodel-create.

 2. Use user token to get password from keystone



 Is it suitable to store user password in db?



 It may be insecure to store the password in the db and expose it to the user in a
 config file, even if the password is encrypted. Heat stored the user password in the
 db before, and has now changed to keystone trusts [3]. But if we use a keystone
 trust, the swift storage-driver does not support it. If we use a trust, we
 expose the magnum user's credential in a config file, which is also insecure.



 Is there a secure way to implement this bp?



 [1] https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master

 [2]
 https://github.com/docker/distribution/blob/master/docs/storage-drivers/swift.md

 [3] https://wiki.openstack.org/wiki/Keystone/Trusts



 Regards,

 Wanghua

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Restart openstack service

2015-08-13 Thread Guo, Ruijing
If you can commit it to devstack, it will benefit everyone

-Original Message-
From: Tony Breeds [mailto:t...@bakeyournoodle.com] 
Sent: Friday, August 14, 2015 12:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [devstack] Restart openstack service

On Fri, Aug 14, 2015 at 04:01:20AM +, Guo, Ruijing wrote:
 Yes. I like this idea to restart all services including nova, neutron, 
 cinder, etc:)

You can *probably* use 

HOST=devstack.domain ./stack-smash.sh '.*'

to restart all the services running under devstack.

Note my list of windows is taken from my typical run and isn't comprehensive

Yours Tony.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]password for registry v2

2015-08-13 Thread 王华
Will the scoped swift trust token time out?

Regards,
Wanghua

On Fri, Aug 14, 2015 at 10:11 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

 Keystone v3 trusts can be scoped to specific actions. In this case the
 node needs a valid auth token to use swift. The token can be a trust token.
 We should generate a trust token scoped to swift for a given user (project)
 and tenant. It can use a system domain account that has a role that allows
 it to CRUD objects in a particular swift storage container. Then the
 registry v2 software can use the swift trust token to access swift, but not
 other OpenStack services. Depending on the trust enforcement available in
 swift, it may even be possible to prevent creation of new swift storage
 containers. We could check to be sure.

 The scoped swift trust token can be injected into the bay master at
 creation time using cloud-init.

 --
 Adrian

 On Aug 13, 2015, at 6:46 PM, 王华 wanghua.hum...@gmail.com wrote:

 Hi hongbin,
 I have comments in line.

 Thank you.

 Regards,
 Wanghua

 On Fri, Aug 14, 2015 at 6:20 AM, Hongbin Lu hongbin...@huawei.com wrote:

 Hi Wanghua,



 For the question about how to pass user password to bay nodes, there are
 several options here:

 1.   Directly inject the password to bay nodes via cloud-init. This
 might be the simplest solution. I am not sure if it is OK in security
 aspect.

 2.   Inject a scoped Keystone trust to bay nodes and use it to fetch
 user password from Barbican (suggested by Adrian).

 If we use trust, who we should let user trust?  If we let user trust
 magnum, then the credential of magnum will occur in vm. I think it is
 insecure.

 3.   Leverage the solution proposed by Kevin Fox [1]. This might be
 a long-term solution.



 For the security concerns about storing credential in a config file, I
 need clarification. What is the config file? Is it a dokcer registry v2
 config file? I guess the credential stored there will be used to talk to
 swift. Is that correct? In general, it is

 The credential stored in docker registry v2 config file is used to talk to
 swift.


 insecure to store user credential inside a VM, because anyone can take a
 snapshot of the VM and boot another VM from the snapshot. Maybe storing a
 scoped credential in the config file could mitigate the security risk. Not
 sure if there is a better solution.



 [1] https://review.openstack.org/#/c/186617/



 Best regards,

 Hongbin



 *From:* 王华 [mailto:wanghua.hum...@gmail.com]
 *Sent:* August-13-15 4:06 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [magnum]password for registry v2



 Hi all,



 In order to add registry v2 to bay nodes[1], authentication information
 is needed for the registry to upload and download files from swift. The
 swift storage-driver in registry now needs the parameters as described in
 [2]. User password is needed. How can we get the password?



 1. Let user pass password in baymodel-create.

 2. Use user token to get password from keystone



 Is it suitable to store user password in db?



 It may be insecure to store password in db and expose it to user in a
 config file even if the password is encrypted. Heat store user password in
 db before, and now change to keystone trust[3]. But if we use keystone
 trust, the swift storage-driver does not support it. If we use trust, we
 expose magnum user's credential in a config file, which is also insecure.



 Is there a secure way to implement this bp?



 [1] https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master

 [2]
 https://github.com/docker/distribution/blob/master/docs/storage-drivers/swift.md

 [3] https://wiki.openstack.org/wiki/Keystone/Trusts



 Regards,

 Wanghua

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Confused syntax error when inserting rule.

2015-08-13 Thread Rui Chen
Hi folks:

I am facing a problem when I insert a rule into Congress. I want to find out
all of the volumes that are not in available status, so I drafted a rule like
this:

error(id) :- cinder:volumes(id=id), not cinder:volumes(id=id,
status=available)

But when I create the rule, an error is raised:

(openstack) congress policy rule create chenrui_p error(id) :-
cinder:volumes(id=id),not cinder:volumes(id=id, status=\available\)
ERROR: openstack Syntax error for rule::Errors: Could not reorder rule
error(id) :- cinder:volumes(id, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5,
_x_0_6, _x_0_7, _x_0_8), not cinder:volumes(id, _x_1_1, _x_1_2,
available, _x_1_4, _x_1_5, _x_1_6, _x_1_7, _x_1_8).  Unsafe lits: not
cinder:volumes(id, _x_1_1, _x_1_2, available, _x_1_4, _x_1_5, _x_1_6,
_x_1_7, _x_1_8) (vars set(['_x_1_2', '_x_1_1', '_x_1_6', '_x_1_7',
'_x_1_4', '_x_1_5', '_x_1_8'])) (HTTP 400) (Request-ID:
req-1f4432d6-f869-472b-aa7d-4cf78dd96fa1)

I checked the Congress policy docs [1]; it looks like the rule doesn't
break any syntax restrictions.

If I modify the rule like this, it works:

(openstack) congress policy rule create chenrui_p error(x) :-
cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7,
_x_0_8),not cinder:volumes(x, _x_0_1, _x_0_2, \available\,_x_0_4, _x_0_5,
_x_0_6, _x_0_7, _x_0_8)
+---------+---------------------------------------------------------------------------------------------+
| Field   | Value                                                                                       |
+---------+---------------------------------------------------------------------------------------------+
| comment | None                                                                                        |
| id      | ad121e09-ba0a-45d6-bd18-487d975d5bf5                                                        |
| name    | None                                                                                        |
| rule    | error(x) :-                                                                                 |
|         |   cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8),        |
|         |   not cinder:volumes(x, _x_0_1, _x_0_2, available, _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8)  |
+---------+---------------------------------------------------------------------------------------------+

I'm not sure whether this is a bug or I'm missing something from the docs, so I
need some feedback from the mailing list.
Feel free to discuss it.


[1]:
http://congress.readthedocs.org/en/latest/policy.html#datalog-syntax-restrictions


Best Regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Restart openstack service

2015-08-13 Thread Rui Chen
I use *screen* in devstack: Ctrl+C kills a service, then I restart it in the
console.

Please try the following cmd in your devstack environment, and read some
docs.

*screen -r stack*

http://www.ibm.com/developerworks/cn/linux/l-cn-screen/



2015-08-14 11:20 GMT+08:00 Guo, Ruijing ruijing@intel.com:

 It is very useful to restart openstack services in devstack so that we
 don’t need to unstack and stack again.



 How much effort would it take to support restarting openstack? Is anyone
 interested in that?



 Thanks,

 -Ruijing

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Restart openstack service

2015-08-13 Thread Tony Breeds
On Fri, Aug 14, 2015 at 04:00:23AM +, Guo, Ruijing wrote:
 I need to reboot hosts and restart openstack services. In this case, screen 
 may not help.

If you need to reboot the host then you should re-run ./stack.sh

Yours Tony.


pgpVjQSWy881w.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kuryr][magnum] Design Specification for Kuryr

2015-08-13 Thread feisky
Will Kuryr support Docker's port mapping?



--
View this message in context: 
http://openstack.10931.n7.nabble.com/neutron-kuryr-magnum-Design-Specification-for-Kuryr-tp82256p82299.html
Sent from the Developer mailing list archive at Nabble.com.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]password for registry v2

2015-08-13 Thread Adrian Otto
You can specify the timeout when you create it, so it is possible to make one 
that effectively has no expiry.

--
Adrian

On Aug 13, 2015, at 7:36 PM, 王华 wanghua.hum...@gmail.com wrote:

Will the scoped swift trust token time out?

Regards,
Wanghua

On Fri, Aug 14, 2015 at 10:11 AM, Adrian Otto 
adrian.o...@rackspace.commailto:adrian.o...@rackspace.com wrote:
Keystone v3 trusts can be scoped to specific actions. In this case the node 
needs a valid auth token to use swift. The token can be a trust token. We 
should generate a trust token scoped to swift for a given user (project) and 
tenant. It can use a system domain account that has a role that allows it to 
CRUD objects in a particular swift storage container. Then the registry v2 
software can use the swift trust token to access swift, but not other OpenStack 
services. Depending on the trust enforcement available in swift, it may even be 
possible to prevent creation of new swift storage containers. We could check to 
be sure.

The scoped swift trust token can be injected into the bay master at creation 
time using cloud-init.

--
Adrian

On Aug 13, 2015, at 6:46 PM, 王华 
wanghua.hum...@gmail.commailto:wanghua.hum...@gmail.com wrote:

Hi hongbin,
I have comments in line.

Thank you.

Regards,
Wanghua

On Fri, Aug 14, 2015 at 6:20 AM, Hongbin Lu 
hongbin...@huawei.commailto:hongbin...@huawei.com wrote:
Hi Wanghua,

For the question about how to pass user password to bay nodes, there are 
several options here:

1.   Directly inject the password to bay nodes via cloud-init. This might 
be the simplest solution. I am not sure if it is OK in security aspect.

2.   Inject a scoped Keystone trust to bay nodes and use it to fetch user 
password from Barbican (suggested by Adrian).

If we use trust, who we should let user trust?  If we let user trust magnum, 
then the credential of magnum will occur in vm. I think it is insecure.

3.   Leverage the solution proposed by Kevin Fox [1]. This might be a 
long-term solution.

For the security concerns about storing credential in a config file, I need 
clarification. What is the config file? Is it a dokcer registry v2 config file? 
I guess the credential stored there will be used to talk to swift. Is that 
correct? In general, it is
The credential stored in docker registry v2 config file is used to talk to 
swift.

insecure to store user credential inside a VM, because anyone can take a 
snapshot of the VM and boot another VM from the snapshot. Maybe storing a 
scoped credential in the config file could mitigate the security risk. Not sure 
if there is a better solution.

[1] https://review.openstack.org/#/c/186617/

Best regards,
Hongbin

From: 王华 [mailto:wanghua.hum...@gmail.commailto:wanghua.hum...@gmail.com]
Sent: August-13-15 4:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum]password for registry v2

Hi all,

In order to add registry v2 to bay nodes[1], authentication information is 
needed for the registry to upload and download files from swift. The swift 
storage-driver in registry now needs the parameters as described in [2]. User 
password is needed. How can we get the password?

1. Let user pass password in baymodel-create.
2. Use user token to get password from keystone

Is it suitable to store user password in db?

It may be insecure to store password in db and expose it to user in a config 
file even if the password is encrypted. Heat store user password in db before, 
and now change to keystone trust[3]. But if we use keystone trust, the swift 
storage-driver does not support it. If we use trust, we expose magnum user's 
credential in a config file, which is also insecure.

Is there a secure way to implement this bp?

[1] https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master
[2] 
https://github.com/docker/distribution/blob/master/docs/storage-drivers/swift.md
[3] https://wiki.openstack.org/wiki/Keystone/Trusts

Regards,
Wanghua

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Congress] Confused syntax error when inserting rule.

2015-08-13 Thread Rui Chen
Sorry for sending the same mail again; please comment here, the other mail
lacks a title.

2015-08-14 11:03 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 Hi folks:

 I face a problem when I insert a rule into Congress. I want to find
 out all of the volumes that are not available status, so I draft a rule
 like this:

 error(id) :- cinder:volumes(id=id), not cinder:volumes(id=id,
 status=available)

 But when I create the rule, a error is raised:

 (openstack) congress policy rule create chenrui_p error(id) :-
 cinder:volumes(id=id),not cinder:volumes(id=id, status=\available\)
 ERROR: openstack Syntax error for rule::Errors: Could not reorder rule
 error(id) :- cinder:volumes(id, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5,
 _x_0_6, _x_0_7, _x_0_8), not cinder:volumes(id, _x_1_1, _x_1_2,
 available, _x_1_4, _x_1_5, _x_1_6, _x_1_7, _x_1_8).  Unsafe lits: not
 cinder:volumes(id, _x_1_1, _x_1_2, available, _x_1_4, _x_1_5, _x_1_6,
 _x_1_7, _x_1_8) (vars set(['_x_1_2', '_x_1_1', '_x_1_6', '_x_1_7',
 '_x_1_4', '_x_1_5', '_x_1_8'])) (HTTP 400) (Request-ID:
 req-1f4432d6-f869-472b-aa7d-4cf78dd96fa1)

 I check the Congress policy docs [1], looks like that the rule don't
 break any syntax restrictions.

 If I modify the rule like this, it works:

 (openstack) congress policy rule create chenrui_p error(x) :-
 cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7,
 _x_0_8),not cinder:volumes(x, _x_0_1, _x_0_2, \available\,_x_0_4, _x_0_5,
 _x_0_6, _x_0_7, _x_0_8)

 +---------+---------------------------------------------------------------------------------------------+
 | Field   | Value                                                                                       |
 +---------+---------------------------------------------------------------------------------------------+
 | comment | None                                                                                        |
 | id      | ad121e09-ba0a-45d6-bd18-487d975d5bf5                                                        |
 | name    | None                                                                                        |
 | rule    | error(x) :-                                                                                 |
 |         |   cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8),        |
 |         |   not cinder:volumes(x, _x_0_1, _x_0_2, available, _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8)  |
 +---------+---------------------------------------------------------------------------------------------+

 I'm not sure this is a bug or I miss something from docs, so I need
 some feedback from mail list.
 Feel free to discuss about it.


 [1]:
 http://congress.readthedocs.org/en/latest/policy.html#datalog-syntax-restrictions


 Best Regards.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Restart openstack service

2015-08-13 Thread Tony Breeds
On Fri, Aug 14, 2015 at 11:31:07AM +0800, Rui Chen wrote:
 I use *screen* in devstack, Ctrl+c kill services, then restart it in
 console.
 
 Please try the following cmd in your devstack environment, and read some
 docs.
 
 *screen -r stack*
 
 http://www.ibm.com/developerworks/cn/linux/l-cn-screen/

It's not baked into devstack but I have a script called 'stack-smash.sh' which 
I run like

HOST=devstack.domain ./stack-smash.sh nova

to restart all the nova services in a running devstack.

---
#!/opt/local/bin/bash

if [ -z "$1" ] ; then
    set -- nova
fi

if [ -z "$HOST" ] ; then
    echo "HOST=devstack.domain $0 $@" >&2
    exit 1
fi

for service in "$@" ; do
    pattern=''

    case $service in
    nova)       pattern="^n-"  ;;
    glance)     pattern="^g-"  ;;
    cinder)     pattern="^c-"  ;;
    keystone)   pattern="^key" ;;
    *)          pattern="$service" ;;
    esac

    for win in key key-access g-reg g-api n-api n-cond n-crt n-net \
               n-sch n-novnc n-cauth n-sproxy n-cpu c-api c-sch c-vol ; do

        [ -z "$pattern" ] && continue

        if [[ $win =~ $pattern ]] ; then
            echo -n "Killing window=$win for service=$service"
            # stop the service by sending Ctrl-C to its screen window
            ssh $HOST -qt screen -S stack -p $win -X stuff '' # this is a literal control-C
            sleep 1s
            # restart it by replaying the last command in that window
            ssh $HOST -qt screen -S stack -p $win -X stuff '!!\\n'
            sleep 1s
            echo " ... done."
        fi
    done
done
---

Yours Tony.


pgpolqFVFf_VK.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Restart openstack service

2015-08-13 Thread Guo, Ruijing
Yes, I like this idea of restarting all services including nova, neutron, cinder,
etc. :)

-Original Message-
From: Tony Breeds [mailto:t...@bakeyournoodle.com] 
Sent: Friday, August 14, 2015 11:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [devstack] Restart openstack service

On Fri, Aug 14, 2015 at 11:31:07AM +0800, Rui Chen wrote:
 I use *screen* in devstack, Ctrl+c kill services, then restart it in 
 console.
 
 Please try the following cmd in your devstack environment, and read 
 some docs.
 
 *screen -r stack*
 
 http://www.ibm.com/developerworks/cn/linux/l-cn-screen/

It's not baked into devstack but I have a script called 'stack-smash.sh' which 
I run like

HOST=devstack.domain ./stack-smash.sh nova

to restart all the nova services in a running devstack.

---
#!/opt/local/bin/bash

if [ -z "$1" ] ; then
    set -- nova
fi

if [ -z "$HOST" ] ; then
    echo "HOST=devstack.domain $0 $@" >&2
    exit 1
fi

for service in "$@" ; do
    pattern=''

    case $service in
    nova)       pattern="^n-"  ;;
    glance)     pattern="^g-"  ;;
    cinder)     pattern="^c-"  ;;
    keystone)   pattern="^key" ;;
    *)          pattern="$service" ;;
    esac

    for win in key key-access g-reg g-api n-api n-cond n-crt n-net \
               n-sch n-novnc n-cauth n-sproxy n-cpu c-api c-sch c-vol ; do

        [ -z "$pattern" ] && continue

        if [[ $win =~ $pattern ]] ; then
            echo -n "Killing window=$win for service=$service"
            # stop the service by sending Ctrl-C to its screen window
            ssh $HOST -qt screen -S stack -p $win -X stuff '' # this is a literal control-C
            sleep 1s
            # restart it by replaying the last command in that window
            ssh $HOST -qt screen -S stack -p $win -X stuff '!!\\n'
            sleep 1s
            echo " ... done."
        fi
    done
done
---

Yours Tony.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Restart openstack service

2015-08-13 Thread Guo, Ruijing
I need to reboot hosts and restart openstack services. In this case, screen may 
not help.


From: Rui Chen [mailto:chenrui.m...@gmail.com]
Sent: Friday, August 14, 2015 11:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [devstack] Restart openstack service

I use screen in devstack, Ctrl+c kill services, then restart it in console.

Please try the following cmd in your devstack environment, and read some docs.

screen -r stack

http://www.ibm.com/developerworks/cn/linux/l-cn-screen/



2015-08-14 11:20 GMT+08:00 Guo, Ruijing 
ruijing@intel.commailto:ruijing@intel.com:
It is very useful to restart openstack services in devstack so that we don’t 
need to unstack and stack again.

How much effort would it take to support restarting openstack? Is anyone interested in that?

Thanks,
-Ruijing

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Restart openstack service

2015-08-13 Thread Tony Breeds
On Fri, Aug 14, 2015 at 04:01:20AM +, Guo, Ruijing wrote:
 Yes. I like this idea to restart all services including nova, neutron, 
 cinder, etc:)

You can *probably* use 

HOST=devstack.domain ./stack-smash.sh '.*'

to restart all the services running under devstack.

Note my list of windows is taken from my typical run and isn't comprehensive

Yours Tony.


pgp4erQl3iC0C.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [third-party]RE: [OpenStack-Infra] ProphetStor CI account

2015-08-13 Thread Rick Chen
Hi Mike:
Sorry again. I have already added an email alert agent to our CI Jenkins
server to capture each failed build result.
[1] - http://lists.openstack.org/pipermail/third-party-announce/2015-June/000192.html
[2] - http://lists.openstack.org/pipermail/third-party-announce/2015-June/000193.html
Solution 1: Our Jenkins client machine is a VMware machine, and I have already
added a snapshot revert script to the Jenkins job script. Each time a git review
job triggers, the client will revert to a clean environment
to solve this problem.
Solution 2: I don't know which change made keystone unable to
import pastedeploy, so I tried uninstalling the python-pastedeploy package
locally to solve an issue about being unable to build devstack.
Sorry again to disturb you.

-Original Message-
From: Mike Perez [mailto:thin...@gmail.com] 
Sent: Friday, August 14, 2015 8:09 AM
To: Rick Chen rick.c...@prophetstor.com
Cc: openstack-in...@lists.openstack.org
Subject: Re: [OpenStack-Infra] ProphetStor CI account

On 23:21 Aug 13, Rick Chen wrote:
 Dear Infra team,
 
  
 
 I have already fixed the issue with my CI machine failing to build devstack.
 Can you re-enable my CI gerrit account?
 
 Account : prophetstor-ci
 
 Account email: prophetstor...@prophetstor.com 
 mailto:prophetstor...@prophetstor.com
 
  
 
 Latest CI verify report:
 
 http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5043/console.html

Hi Rick,

This is the second time [1] I have raised a problem with the Prophetstor CI. You
said you would then fix things [2], but my email went unanswered [3].

"I will fix it" is not going to cut it this time. I want actual answers on
how you're fixing this so we can stop this dance. Also, what are you doing
for monitoring, instead of me having to notify you?

[1] - http://lists.openstack.org/pipermail/third-party-announce/2015-June/000192.html
[2] - http://lists.openstack.org/pipermail/third-party-announce/2015-June/000193.html
[3] - http://lists.openstack.org/pipermail/third-party-announce/2015-June/000219.html

--
Mike Perez


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]password for registry v2

2015-08-13 Thread Adrian Otto
Keystone v3 trusts can be scoped to specific actions. In this case the node 
needs a valid auth token to use swift. The token can be a trust token. We 
should generate a trust token scoped to swift for a given user (project) and 
tenant. It can use a system domain account that has a role that allows it to 
CRUD objects in a particular swift storage container. Then the registry v2 
software can use the swift trust token to access swift, but not other OpenStack 
services. Depending on the trust enforcement available in swift, it may even be 
possible to prevent creation of new swift storage containers. We could check to 
be sure.

The scoped swift trust token can be injected into the bay master at creation 
time using cloud-init.
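
As a rough sketch of what the delegation could look like with the v3 trusts API (the
user names and the "swiftoperator" role below are placeholders for illustration, not
something Magnum does today):

  # trustor = the bay owner, trustee = the service user the registry runs as,
  # delegating only the role needed for the registry's swift container
  openstack trust create --project demo --role swiftoperator bay-owner registry-svc

The trustee can then request a trust-scoped token (e.g. via --os-trust-id) that only
carries the delegated role, and that token is what would be injected through cloud-init.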

--
Adrian

On Aug 13, 2015, at 6:46 PM, Wanghua
wanghua.hum...@gmail.com wrote:

Hi Hongbin,
I have comments inline.

Thank you.

Regards,
Wanghua

On Fri, Aug 14, 2015 at 6:20 AM, Hongbin Lu hongbin...@huawei.com wrote:
Hi Wanghua,

For the question of how to pass the user password to bay nodes, there are
several options:

1. Directly inject the password into the bay nodes via cloud-init. This might
be the simplest solution, but I am not sure whether it is acceptable from a
security standpoint.

2. Inject a scoped Keystone trust into the bay nodes and use it to fetch the user
password from Barbican (suggested by Adrian).

If we use a trust, whom should the user trust? If the user trusts Magnum, then
Magnum's credential will end up in the VM. I think that is insecure.

3. Leverage the solution proposed by Kevin Fox [1]. This might be a
long-term solution.

For the security concerns about storing a credential in a config file, I need
clarification. What is the config file? Is it a docker registry v2 config file? I
guess the credential stored there will be used to talk to swift. Is that correct?
In general, it is insecure to store a user credential inside a VM, because anyone
can take a snapshot of the VM and boot another VM from the snapshot. Maybe storing
a scoped credential in the config file could mitigate the security risk. Not sure
if there is a better solution.

Yes, that is correct - the credential stored in the docker registry v2 config file
is used to talk to swift.
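
For illustration, this is roughly the shape of the registry storage section in
question: a minimal sketch using the parameter names from the swift storage-driver
docs referenced as [2] in the original mail below (all values are placeholders, and
the plain-text password line is exactly the part we would like to avoid):

  storage:
    swift:
      authurl: http://keystone.example.com:5000/v3
      username: demo-user
      password: THE-PROBLEM
      container: registry-storage
      tenant: demo
      domain: default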

[1] https://review.openstack.org/#/c/186617/

Best regards,
Hongbin

From: Wanghua [mailto:wanghua.hum...@gmail.com]
Sent: August-13-15 4:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum]password for registry v2

Hi all,

In order to add registry v2 to bay nodes [1], authentication information is needed
for the registry to upload and download files from swift. The registry's swift
storage-driver currently needs the parameters described in [2], including the user
password. How can we get the password?

1. Let the user pass the password in baymodel-create.
2. Use the user token to get the password from Keystone.

Is it suitable to store the user password in the DB?

It may be insecure to store the password in the DB and expose it to the user in a
config file, even if the password is encrypted. Heat used to store the user password
in its DB and has since switched to Keystone trusts [3]. But the swift storage-driver
does not support Keystone trusts, and if we use a trust for the Magnum service user,
we expose Magnum's credential in a config file, which is also insecure.

Is there a secure way to implement this bp?

[1] https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master
[2] 
https://github.com/docker/distribution/blob/master/docs/storage-drivers/swift.md
[3] https://wiki.openstack.org/wiki/Keystone/Trusts

Regards,
Wanghua

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress]

2015-08-13 Thread Rui Chen
Hi folks:

I ran into a problem when inserting a rule into Congress. I want to find all
of the volumes that are not in available status, so I drafted a rule like
this:

error(id) :- cinder:volumes(id=id), not cinder:volumes(id=id,
status=available)

But when I create the rule, an error is raised:

(openstack) congress policy rule create chenrui_p error(id) :-
cinder:volumes(id=id),not cinder:volumes(id=id, status=\available\)
ERROR: openstack Syntax error for rule::Errors: Could not reorder rule
error(id) :- cinder:volumes(id, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5,
_x_0_6, _x_0_7, _x_0_8), not cinder:volumes(id, _x_1_1, _x_1_2,
available, _x_1_4, _x_1_5, _x_1_6, _x_1_7, _x_1_8).  Unsafe lits: not
cinder:volumes(id, _x_1_1, _x_1_2, available, _x_1_4, _x_1_5, _x_1_6,
_x_1_7, _x_1_8) (vars set(['_x_1_2', '_x_1_1', '_x_1_6', '_x_1_7',
'_x_1_4', '_x_1_5', '_x_1_8'])) (HTTP 400) (Request-ID:
req-1f4432d6-f869-472b-aa7d-4cf78dd96fa1)

I checked the Congress policy docs [1], and it looks like the rule doesn't
break any of the syntax restrictions.

If I modify the rule like this, it works:

(openstack) congress policy rule create chenrui_p error(x) :-
cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7,
_x_0_8),not cinder:volumes(x, _x_0_1, _x_0_2, \available\,_x_0_4, _x_0_5,
_x_0_6, _x_0_7, _x_0_8)
+---------+----------------------------------------------------------------------------------------------+
| Field   | Value                                                                                        |
+---------+----------------------------------------------------------------------------------------------+
| comment | None                                                                                         |
| id      | ad121e09-ba0a-45d6-bd18-487d975d5bf5                                                         |
| name    | None                                                                                         |
| rule    | error(x) :-                                                                                  |
|         |   cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8),         |
|         |   not cinder:volumes(x, _x_0_1, _x_0_2, available, _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8)   |
+---------+----------------------------------------------------------------------------------------------+

I'm not sure whether this is a bug or I'm missing something from the docs, so
I'd like some feedback from the mailing list.
Feel free to discuss.


[1]:
http://congress.readthedocs.org/en/latest/policy.html#datalog-syntax-restrictions
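
One possible reading of the reorder error, based on the "Unsafe lits" variables it
lists (my interpretation, not an authoritative answer): every variable in a negated
literal must also appear in some non-negated literal of the same rule, and the
named-column form expands the unnamed columns of the negated table into fresh
variables (_x_1_*) that occur only under the negation. A simplified illustration:

  unsafe: p(x) :- q(x), not r(x, y)        y appears only in the negated literal
  safe:   p(x) :- q(x, y), not r(x, y)     y is bound by a positive literal

which is effectively what the fully spelled-out rule above achieves by reusing the
same _x_0_* variables in both literals.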


Best Regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Restart openstack service

2015-08-13 Thread Guo, Ruijing
It is very useful to be able to restart OpenStack services in devstack so that we
don't need to unstack and stack again.

How much effort would it take to support restarting OpenStack services? Is anyone interested in that?

Thanks,
-Ruijing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >