Re: [openstack-dev] [Zun][Glare][Glance] Building Docker images

2016-12-16 Thread Kairat Kushaev
Hello Denis,
unfortunately, I don't have deep knowledge of Zun, so I can speak from the
Glare side only.
Glare can serve as a kind of artifact storage for container files, but we
need to define the artifact structure first.
Please note that an artifact is immutable after activation, so once you need
to change some files you have to create a new artifact.
It is also possible to store container images themselves in Glare, but this
requires integration work on the Zun side to consume blobs from Glare, i.e.
some improvements outside Glare.
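The immutability rule above can be illustrated with a small sketch. This is purely illustrative pseudo-model code, not the actual Glare API: the class, status values and method names are assumptions.

```python
class ImmutableAfterActivation(Exception):
    """Raised when someone tries to modify an activated artifact."""


class Artifact:
    def __init__(self, name, files=None):
        self.name = name
        self.files = dict(files or {})   # static data attached to the artifact
        self.status = 'drafted'          # mutable only while drafted

    def add_file(self, key, blob):
        if self.status == 'active':
            raise ImmutableAfterActivation(
                'active artifacts are immutable; create a new artifact')
        self.files[key] = blob

    def activate(self):
        self.status = 'active'


art = Artifact('my-container-files')
art.add_file('Dockerfile', b'FROM alpine')
art.activate()
try:
    # After activation any change must go into a new artifact instead.
    art.add_file('app.tar', b'...')
except ImmutableAfterActivation:
    pass
```

The point of the sketch is only the lifecycle: draft-time mutation is allowed, post-activation mutation is rejected.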

Best regards,
Kairat Kushaev

On Tue, Dec 13, 2016 at 8:16 PM, Hongbin Lu <hongbin...@huawei.com> wrote:

> Denis,
>
>
>
> Per my understanding, container image building is out of the scope of the
> Zun project. Zun assumes an image has been built and uploaded to an image
> repository (i.e. Glance or a Docker registry); the image will then be
> pulled down from the repo to the host. However, feel free to let us know
> if there is anything else we can do.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Denis Makogon [mailto:lildee1...@gmail.com]
> *Sent:* December-12-16 4:51 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Zun][Glare][Glance] Building Docker images
>
>
>
> Hello to All.
>
>
>
> I’d like to get initial feedback on the idea of building Docker images
> through Zun, involving Glare as an artifactory for all static components
> required for an image.
>
>   So, the idea here is to be capable of building a Docker image
> through the Zun API while storing all static data required for the build
> in Glare or Swift. In order to keep the same UX as native Docker, it would
> be better to use a Dockerfile as the description format for image building.
>
>   In the image creation process Glare could take the role of an
> artifactory where users store, let’s say, the source code of their
> applications that would run in containers, static data, etc. Those
> artifacts would be pulled during image creation and injected into the
> image (similar to the context creation during a native Docker image
> build). Please note that artifacts are completely optional for images,
> but they would give a capability to keep artifacts in dedicated storage
> instead of transferring all data through the Zun API (the opposite of the
> Docker build-context concept).
>
>
>
>   Once the image is created, it can be stored in the Docker
> instance underlying Zun, or published into Glance or Swift for further
> consumption (if a user needs to save the image, they’ll use the Glance
> image download API). I’ve mentioned Swift vs. Glance because Swift has the
> concept of temp URLs that can be accessed without authorization. This
> feature allows Swift to be used as storage from which the image can be
> exported to Docker using the import API [1].
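For context, the Swift temp URL mechanism Denis refers to signs the request method, an expiry timestamp and the object path with an account secret key. A rough sketch of how such a signature is computed (the account, container, object and key below are made-up values):

```python
import hashlib
import hmac
import time


def temp_url(method, path, key, ttl):
    """Build a Swift-style temp URL query string: an HMAC-SHA1 signature
    over the request method, an expiry timestamp and the object path."""
    expires = int(time.time()) + ttl
    body = '\n'.join([method, str(expires), path]).encode()
    sig = hmac.new(key, body, hashlib.sha1).hexdigest()
    return '{0}?temp_url_sig={1}&temp_url_expires={2}'.format(
        path, sig, expires)


# An unauthenticated GET link that an import workflow could pull from:
url = temp_url('GET', '/v1/AUTH_demo/images/app.tar', b'secret-key', 3600)
```

Anyone holding this URL can fetch the object until the expiry timestamp, without a Keystone token, which is what makes the Swift-to-Docker export path attractive here.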
>
>
>
>
>
> Any feedback on the idea is appreciated.
>
>
>
> Kind regards,
>
> Denis Makogon
>
>
>
> [1] https://docs.docker.com/engine/reference/commandline/import/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Glare][Heat][Murano][Tosca] Review of artifact structure for specific projects in Glare

2016-09-06 Thread Kairat Kushaev
Sorry, I posted the wrong link for the TOSCA folks.
The correct link to the TOSCA artifact is here:
https://review.openstack.org/#/c/365631/

Best regards,
Kairat Kushaev

On Mon, Sep 5, 2016 at 4:18 PM, Kairat Kushaev <kkush...@mirantis.com>
wrote:

> TL;DR: Heat, Murano, TOSCA folks please review artifact structure
> proposed for Glare.
>
> Hi all,
> as you may know, Glare is a project that provides binary artifacts for an
> OpenStack cloud.
> Glare allows developers to specify an artifact structure that can be
> consumed by other services such as Murano, Heat, TOSCA and others.
> One of the highest priorities for our project is the migration of
> app-catalog to Glare v1. In order to do that we prepared py-files that
> define artifact types for the different artifacts presented in
> app-catalog: Heat templates, TOSCA templates and Murano packages.
> These artifact types define the structure of the artifacts that can be
> consumed from the OpenStack App Catalog. We assume that artifacts keep the
> same structure when a user deploys Glare locally. So, in order to meet all
> requirements for future Glare use, we need feedback from the appropriate
> teams on the artifact structure (by artifact structure I mean the list of
> attributes specified for each artifact type).
> It will help us understand what we missed in the artifact definitions and
> address all requirements that different projects may have (so integration
> with Glare will be easier).
>
> Here are the references to the patches where we define the structure:
> Heat template and Heat environment (https://review.openstack.org/#/c/365629/)
> Murano package (https://review.openstack.org/#/c/365630/)
> TOSCA template (https://review.openstack.org/#/c/365632/)
>
> Best regards,
> Kairat Kushaev
>


[openstack-dev] [Glare][Heat][Murano][Tosca] Review of artifact structure for specific projects in Glare

2016-09-05 Thread Kairat Kushaev
TL;DR: Heat, Murano, TOSCA folks please review artifact structure proposed
for Glare.

Hi all,
as you may know, Glare is a project that provides binary artifacts for an
OpenStack cloud.
Glare allows developers to specify an artifact structure that can be
consumed by other services such as Murano, Heat, TOSCA and others.
One of the highest priorities for our project is the migration of
app-catalog to Glare v1. In order to do that we prepared py-files that
define artifact types for the different artifacts presented in app-catalog:
Heat templates, TOSCA templates and Murano packages.
These artifact types define the structure of the artifacts that can be
consumed from the OpenStack App Catalog. We assume that artifacts keep the
same structure when a user deploys Glare locally. So, in order to meet all
requirements for future Glare use, we need feedback from the appropriate
teams on the artifact structure (by artifact structure I mean the list of
attributes specified for each artifact type).
It will help us understand what we missed in the artifact definitions and
address all requirements that different projects may have (so integration
with Glare will be easier).

Here are the references to the patches where we define the structure:
Heat template and Heat environment (https://review.openstack.org/#/c/365629/)
Murano package (https://review.openstack.org/#/c/365630/)
TOSCA template (https://review.openstack.org/#/c/365632/)

Best regards,
Kairat Kushaev


Re: [openstack-dev] mod_wsgi: what services are people using with it?

2016-08-22 Thread Kairat Kushaev
Just a small note about mod_wsgi and Glance:
you need mod_wsgi v.4.4.0 or older for Glance to work in daemon mode;
otherwise, Glance will fail when uploading images.

Best regards,
Kairat Kushaev

On Sat, Aug 20, 2016 at 4:00 AM, Nick Papadonis <npapado...@gmail.com>
wrote:

>
>
> On Fri, Aug 19, 2016 at 5:34 PM, John Dickinson <m...@not.mn> wrote:
>
>>
>>
>> On 17 Aug 2016, at 15:27, Nick Papadonis wrote:
>>
>> > comments
>> >
>> > On Wed, Aug 17, 2016 at 4:53 PM, Matthew Thode <
>> prometheanf...@gentoo.org>
>> > wrote:
>> >
>> >> On 08/17/2016 03:52 PM, Nick Papadonis wrote:
>> >>
>> >>> Thanks for the quick response!
>> >>>
>> >>> Glance worked for me in Mitaka.  I had to specify 'chunked transfers'
>> >>> and increase the size limit to 5GB.  I had to pull some of the WSGI
>> >>> source from glance and alter it slightly to call from Apache.
>> >>>
>> >>> I saw that Nova claims mod_wsgi is 'experimental'.  Interested in
>> >>> whether it's really experimental or folks use it in production.
>> >>>
>> >>> Nick
>> >>
>> >> ya, cinder is experimental too (at least in my usage) as I'm using
>> >> python3 as well :D  For me it's a case of having to test the packages I
>> >> build.
>> >>
>> >>
>> > I converted Cinder to mod_wsgi because from what I recall, I found that
>> SSL
>> > support was removed from the Eventlet server.  Swift endpoint outputs a
>> log
>> > warning that Eventlet SSL is only for testing purposes, which is another
>> > reason why I turned to mod_wsgi for that.
>>
>> FWIW, most prod Swift deployments I know of use HAProxy or stud to
>> terminate TLS before forwarding the http stream to a proxy endpoint (local
>> or remote). Especially when combined with a server that has AES-NI, this
>> gives good performance.
>
>
> Thanks.  I'd be interested if anyone has done a performance comparison of
> HAProxy vs mod_wsgi to terminate.
>


[openstack-dev] [Glance][Glare] External locations design

2016-08-01 Thread Kairat Kushaev
Hello all,

I would like to start describing some design decisions we made in the Glare
code (https://review.openstack.org/#/q/topic:bp/glare-api+status:open). If
you are not familiar with Glare, I suggest you read the following spec:

https://github.com/openstack/glance-specs/blob/master/specs/newton/approved/glance/glare-api.rst

I hope it will help other folks understand the Glare approach and provide
some constructive feedback on Glare. I think that we can also use the Glare
solution for Glance in the near future to address some drawbacks we have in
Glance.

Glare locations

Glance and Glare both make it possible to set an external URL as an
image (artifact) location. This feature is quite useful for users who would
like to refer to some external image or artifact (for example, a Fedora
image on the official Fedora site) rather than store that image or artifact
in the cloud.

External locations in Glance have several peculiarities:

   1. It is possible to set up multiple locations for an image. Glance uses
      a special location strategy to define which location to use. This
      strategy is defined in the Glance codebase and can be configured in
      the Glance conf.

   2. Glance doesn’t distinguish image locations specified by URL from
      image locations uploaded to a Glance backend. Glance has some
      restrictions about which URLs can be used for locations (see the
      Glance docs for more info).

Glare external locations are designed differently in order to address some
drawbacks we have in Glance. The approach is the following:

   1. Glare doesn’t support multiple locations; instead, you can specify a
      dict of blobs in the artifact type and add a URL for each blob in the
      dict. The user must define a name (e.g. a region name or priority)
      for each blob in the dict, and this name can be used to retrieve the
      blob from the artifact. So the decision about which location to use
      is made outside of Glare.

   2. Glare adds a special flag to the database for external locations, so
      they are treated differently when an artifact is deleted. If a blob
      value is an external URL, we don’t need to pass that URL to the
      backend and can just delete the record in the DB. For now, Glare only
      allows http(s) locations to be set; this may be extended in the
      future, but the idea stays the same: external locations are just
      records in the DB.

   3. Glare saves the blob size and checksum when an external URL is
      specified: Glare downloads the blob from the URL and calculates its
      size and checksum. Of course, this leads to some performance
      degradation, but we can ensure that the external blob is immutable.
      We made this choice because security seems more important for Glare
      than performance. There are also plans to extend this approach with
      subscriptions for external locations, so we can further increase the
      security of that operation.
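The size/checksum capture described in point 3 can be sketched as a single streaming pass over the blob. Here an in-memory buffer stands in for the real HTTP response body, and the function name is an illustration rather than actual Glare code:

```python
import hashlib
import io


def measure_blob(stream, chunk_size=65536):
    """Stream a blob once, returning (size, md5-hex) without ever holding
    the whole blob in memory."""
    size = 0
    md5 = hashlib.md5()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        size += len(chunk)
        md5.update(chunk)
    return size, md5.hexdigest()


# In Glare this stream would be the HTTP response body of the external URL;
# the recorded values can later detect if the remote blob has changed.
size, checksum = measure_blob(io.BytesIO(b'fedora-image-bytes'))
```

Re-running the same pass later and comparing against the stored size and checksum is exactly what makes an external blob verifiably immutable.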


I think that some of the features above can be implemented in Glance. For
example, we can treat our locations as read-only links once the external
flag is implemented. It will allow us to ensure that only blobs uploaded
through Glance are managed by Glance.

Additionally, if we calculate the checksum and size for external URLs, we
can ensure that all multiple locations refer to the same blob, so the
management of multiple locations (deletion/creation) can be more secure.
We can also ensure that the blob behind an external URL has not changed.

I understand that we need a spec for this, but I would like to discuss it
at a high level first. Here is an etherpad for the discussion:
https://etherpad.openstack.org/p/glare-locations


Best regards,
Kairat Kushaev


Re: [openstack-dev] [kolla][keystone] is there chance the keystone cached the catalog and can not get the latest endpoints?

2016-06-29 Thread Kairat Kushaev
Hi,
Looks like this bug is a duplicate of
https://bugs.launchpad.net/oslo.cache/+bug/1590779.
HTH

Best regards,
Kairat Kushaev

On Wed, Jun 29, 2016 at 3:32 PM, Jeffrey Zhang <zhang.lei@gmail.com>
wrote:

> In Kolla CI, we see[1]
>
> publicURL endpoint for compute service not found
>
> this error for many times. The bug is here[0]
>
>
> After some debugging, I found the endpoint does exist in the DB. But when
> running `nova service-list`, it says `publicURL endpoint for compute
> service not found`. After a few seconds, when you run
> `nova service-list` again, it works as expected.
>
> I think the root cause is in Keystone: it seems that Keystone cached the
> catalog and returns the cached version without querying the DB. Can
> anyone explain why this happens, and how to avoid it (any workaround?)
>
> The env:
> OS/Docker image: no matter, this happen on both CentOS and Ubuntu
> OpenStack: master branch
>
> [0] https://bugs.launchpad.net/kolla/+bug/1587226
> [1]
> http://logs.openstack.org/91/328891/5/check/gate-kolla-dsvm-deploy-oraclelinux-source/3af433c/console.html#_2016-06-29_10_10_33_298298
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-14 Thread Kairat Kushaev
Hi all,
it looks like Glare may cover some (or all) of your use cases.
To understand the functionality proposed for Glare, I suggest you read
this: https://review.openstack.org/#/c/283136/.
It would be cool if Glare supports everything you need. If you need
something else, please add a comment and we will try to consider your
requirements.
You can also contact me (kairat), Mike Fedosin (mfedosin) or Nikhil Komawar
(nikhil) with any questions related to Glare.

Best regards,
Kairat Kushaev

On Tue, Jun 14, 2016 at 10:43 AM, Flavio Percoco <fla...@redhat.com> wrote:

> On 13/06/16 18:46 +, Hongbin Lu wrote:
>
>>
>>
>> -Original Message-
>>> From: Sudipto Biswas [mailto:sbisw...@linux.vnet.ibm.com]
>>> Sent: June-13-16 1:43 PM
>>> To: openstack-dev@lists.openstack.org
>>> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
>>>
>>>
>>>
>>> On Monday 13 June 2016 06:57 PM, Flavio Percoco wrote:
>>> > On 12/06/16 22:10 +, Hongbin Lu wrote:
>>> >> Hi team,
>>> >>
>>> >> During the team meetings these weeks, we collaborated the initial
>>> >> project roadmap. I summarized it as below. Please review.
>>> >>
>>> >> * Implement a common container abstraction for different container
>>> >> runtimes. The initial implementation will focus on supporting basic
>>> >> container operations (i.e. CRUD).
>>> >
>>> > What COE's are being considered for the first implementation? Just
>>> > docker and kubernetes?
>>>
>> [Hongbin Lu] Container runtimes, docker in particular, are being
considered for the first implementation. We discussed how to support COEs
in Zun but could not reach an agreement on the direction. I will leave it
for further discussion.
>>
>> >
>>> >> * Focus on non-nested containers use cases (running containers on
>>> >> physical hosts), and revisit nested containers use cases (running
>>> >> containers on VMs) later.
>>> >> * Provide two set of APIs to access containers: The Nova APIs and
>>> the
>>> >> Zun-native APIs. In particular, the Zun-native APIs will expose full
>>> >> container capabilities, and Nova APIs will expose capabilities that
>>> >> are shared between containers and VMs.
>>> >
>>> > - Is the nova side going to be implemented in the form of a Nova
>>> > driver (like ironic's?)? What do you mean by APIs here?
>>>
>> [Hongbin Lu] Yes, the plan is to implement a Zun virt-driver for Nova.
>> The idea is similar to Ironic.
>>
>> >
>>> > - What operations are we expecting this to support (just CRUD
>>> > operations on containers?)?
>>>
>> [Hongbin Lu] We are working on finding the list of operations to support.
>> There is a BP for tracking this effort:
>> https://blueprints.launchpad.net/zun/+spec/api-design .
>>
>> >
>>> > I can see this driver being useful for specialized services like
>>> Trove
>>> > but I'm curious/concerned about how this will be used by end users
>>> > (assuming that's the goal).
>>>
>> [Hongbin Lu] I agree that end users might not be satisfied by basic
>> container operations like CRUD. We will discuss how to offer more to make
>> the service to be useful in production.
>>
>
> I'd probably leave this out for now but this is just my opinion.
> Personally, I
> think users, if presented with both APIs - nova's and Zun's - they'll
> prefer
> Zun's.
>
> Specifically, you don't interact with a container the same way you
> interact with
> a VM (but I'm sure you know all these way better than me). I guess my
> concern is
> that I don't see too much value in this other than allowing specialized
> services
> to run containers through Nova.
>
>
> >
>>> >
>>> >> * Leverage Neutron (via Kuryr) for container networking.
>>> >> * Leverage Cinder for container data volume.
>>> >> * Leverage Glance for storing container images. If necessary,
>>> >> contribute to Glance for missing features (i.e. support layer of
>>> >> container images).
>>> >
>>> > Are you aware of https://review.openstack.org/#/c/249282/ ?
>>> This support is very minimalistic in nature, since it doesn't do
>>> anything beyond just storing a docker FS tar ball.
>>> I think it was felt that, further support for docker FS was needed.
>>> While there were suggestions 

Re: [openstack-dev] [nova][glance][qa] - Nova glance v2 work complete

2016-06-10 Thread Kairat Kushaev
\o/
That's awesome.
Big thanks to mfedosin and sudipto for driving this work.

Best regards,
Kairat Kushaev

On Fri, Jun 10, 2016 at 2:52 PM, Monty Taylor <mord...@inaugust.com> wrote:

> On 06/10/2016 01:19 PM, Sean Dague wrote:
> > On 06/07/2016 04:55 PM, Matt Riedemann wrote:
> >> I tested the glance v2 stack (glance v1 disabled) using a devstack
> >> change here:
> >>
> >> https://review.openstack.org/#/c/325322/
> >>
> >> Now that the changes are merged up through the base nova image proxy and
> >> the libvirt driver, and we just have hyper-v/xen driver changes for that
> >> series, we should look at gating on this configuration.
> >>
> >> I was originally thinking about adding a new job for this, but it's
> >> probably better if we just change one of the existing integrated gate
> >> jobs, like gate-tempest-dsvm-full or gate-tempest-dsvm-neutron-full.
> >>
> >> Does anyone have an issue with that? Glance v1 is deprecated and the
> >> configuration option added to nova (use_glance_v1) defaults to True for
> >> compat but is deprecated, and the Nova team plans to drop its v1 proxy
> >> code in Ocata. So it seems like changing config to use v2 in the gate
> >> jobs should be a non-issue. We'd want to keep at least one integrated
> >> gate job using glance v1 to make sure we don't regress anything there in
> >> Newton.
> >
> > use_glance_v1=False has now been merged as the default, so all jobs are
> > now using glance v2 for the Nova <=> Glance communication -
> > https://review.openstack.org/#/c/321551/
> >
> > Thanks to Mike and Sudipta for pushing this to completion.
>
> Congrats everybody!!!
>
>


Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-22 Thread Kairat Kushaev
+1

Best regards,
Kairat Kushaev

On Sat, May 21, 2016 at 1:00 AM, Nikhil Komawar <nik.koma...@gmail.com>
wrote:

> Hello all,
>
>
> I want to propose having a dedicated virtual sync next week Thursday May
> 26th at 1500UTC for one hour on the Import Refactor work [1] ongoing in
> Glance. We are making a few updates to the spec; so it would be good to
> have everyone on the same page and soon start merging those spec changes.
>
>
> Also, I would like for this sync to be cross project one so that all the
> different stakeholders are aware of the updates to this work even if you
> just want to listen in.
>
>
> Please vote with +1, 0, -1. Also, if the time doesn't work please
> propose 2-3 additional time slots.
>
>
> We can decide later on the tool and I will setup agenda if we have
> enough interest.
>
>
> [1]
>
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html
>
>
> --
>
> Thanks,
> Nikhil
>
>


Re: [openstack-dev] [Glance] How to add location in the image

2016-04-11 Thread Kairat Kushaev
It must be a valid HTTP reference to an image file that can be downloaded
by Glance.
For example:
http://cdimage.debian.org/cdimage/openstack/testing/debian-testing-openstack-amd64.qcow2
You can also ask similar questions in the #openstack-glance IRC channel.
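A minimal sketch of what "valid HTTP reference" means in practice — an http(s) URL with a host and an actual object path. The check below is illustrative, not Glance's real validation code:

```python
from urllib.parse import urlparse


def looks_like_valid_location(url):
    """Accept only http(s) URLs that name a host and point at a file path."""
    parts = urlparse(url)
    return (parts.scheme in ('http', 'https')
            and bool(parts.netloc)
            and parts.path not in ('', '/'))


example = ('http://cdimage.debian.org/cdimage/openstack/testing/'
           'debian-testing-openstack-amd64.qcow2')
ok = looks_like_valid_location(example)            # a downloadable image file
bad = looks_like_valid_location('ftp://x/y.img')   # unsupported scheme
```

Glance itself applies further restrictions on allowed location schemes; this only captures the basic shape of an acceptable URL.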


Best regards,
Kairat Kushaev

On Mon, Apr 11, 2016 at 11:17 AM, Pankaj Mishra <pm.mishra...@gmail.com>
wrote:

> Hi All,
>
> I created an image and I want to add a location to the created image.
> For that I am executing this command from the glance CLI:
>
> usage: glance --os-image-api-version 2 location-add --url <URL>
> [--metadata <STRING>] <IMAGE_ID>
>
> So here, what is the <URL>? What should I pass to add the location?
>
> Please anybody can help me out to execute this command.
>
> Thanks,
> Pankaj
>


Re: [openstack-dev] [Congress] New bug for Mitaka: Glance authentication fails after token expiry

2016-03-30 Thread Kairat Kushaev
Hi,
I proposed a fix for that: https://review.openstack.org/299136.
Looking forward to your feedback on it.

I also reviewed the Congress patch a bit, and I am wondering whether
Keystone sessions are an appropriate approach to working with clients in
services like Congress.



Best regards,
Kairat Kushaev

On Tue, Mar 29, 2016 at 10:56 PM, Nikhil Komawar <nik.koma...@gmail.com>
wrote:

> Thanks for bringing this up Eric!
>
> On 3/29/16 4:01 PM, Eric K wrote:
> > I just discovered a bug that’s probably been around a long time but
> hidden
> > by exception suppression.
> https://bugs.launchpad.net/congress/+bug/1563495
> > When an auth attempt fails due to token expiry, Congress Glance driver
> > obtains a new token from keystone and sets it in Glance client, but for
> > some reason, Glance client continues to use the expired token and fails
> to
> > authenticate. Glance data stops flowing to Congress. It might explain the
> > issue Bryan Sullivan ran into
> > (
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/087364.ht
> > ml).
> >
> > I haven’t been able to nail down whether it’s a Congress datasource
> driver
> > issue or a Glance client issue. A few more eyes on it would be great.
> > Thanks!
> >
> >
> >
> >
>
> --
>
> Thanks,
> Nikhil
>
>


Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-20 Thread Kairat Kushaev

Also curious about this. It seems weird to separate the 'positive' and
the 'negative' ones, assuming those patches are mostly contributed by
the same group of developers.


Yeah, agreed. This approach leads to a situation where I need to look at
two places to observe the test coverage for each component.
Also, when I would like to add some tests, I need to contribute to two
places, which is not convenient for reviewers or contributors IMO.


Best regards,
Kairat Kushaev

On Thu, Mar 17, 2016 at 8:31 AM, Qiming Teng <teng...@linux.vnet.ibm.com>
wrote:

> >
> > I'd love to see this idea explored further. What happens if Tempest
> > ends up without tests, as a library for shared code as well as a
> > centralized place to run tests from via plugins?
> >
>
> Also curious about this. It seems weird to separate the 'positive' and
> the 'negative' ones, assuming those patches are mostly contributed by
> the same group of developers.
>
> Qiming
>
>


Re: [openstack-dev] [all] tenant vs. project

2016-02-25 Thread Kairat Kushaev
>
> os1:~> set | grep ^OS_
> OS_AUTH_URL=http://10.42.0.50:5000/v2.0
> OS_CACERT=
> OS_IDENTITY_API_VERSION=2.0
> OS_NO_CACHE=1
> OS_PASSWORD=pass
> OS_PROJECT_NAME=demo
> OS_REGION_NAME=RegionOne
> OS_USERNAME=demo
> OS_VOLUME_API_VERSION=2
>
> os1:~> cinder list
> ERROR: You must provide a tenant_name, tenant_id, project_id or
> project_name (with project_domain_name or project_domain_id) via
> --os-tenant-name (env[OS_TENANT_NAME]),  --os-tenant-id
> (env[OS_TENANT_ID]),  --os-project-id (env[OS_PROJECT_ID])
> --os-project-name (env[OS_PROJECT_NAME]),  --os-project-domain-id
> (env[OS_PROJECT_DOMAIN_ID])  --os-project-domain-name
> (env[OS_PROJECT_DOMAIN_NAME])
>
> os1:~> glance image-list
> You must provide a project_id or project_name (with project_domain_name
> or project_domain_id) via   --os-project-id (env[OS_PROJECT_ID])
> --os-project-name (env[OS_PROJECT_NAME]),  --os-project-domain-id
> (env[OS_PROJECT_DOMAIN_ID])  --os-project-domain-name
> (env[OS_PROJECT_DOMAIN_NAME])


It looks like project names are only unique within a domain,
so clients require the project domain to be specified for v3.
Otherwise they raise an error.
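The error messages above boil down to a simple rule: a v3 scope needs either a project ID, or a project name plus its domain. A small sketch of that rule (illustrative only, not actual client code):

```python
def missing_v3_scope_vars(env):
    """Return which OS_* variables a v3-aware client would still complain
    about, mirroring the cinder/glance errors quoted above."""
    missing = []
    if 'OS_PROJECT_ID' not in env and 'OS_PROJECT_NAME' not in env:
        missing.append('OS_PROJECT_NAME or OS_PROJECT_ID')
    # A project *name* is only unique within a domain, so the domain is
    # required too; a project *ID* is globally unique and needs no domain.
    if 'OS_PROJECT_NAME' in env and not (
            'OS_PROJECT_DOMAIN_ID' in env
            or 'OS_PROJECT_DOMAIN_NAME' in env):
        missing.append('OS_PROJECT_DOMAIN_NAME or OS_PROJECT_DOMAIN_ID')
    return missing


shell_env = {'OS_PROJECT_NAME': 'demo'}  # the settings from the shell above
still_needed = missing_v3_scope_vars(shell_env)
```

With only `OS_PROJECT_NAME=demo` set, the domain variables are still missing, which is exactly what both clients report.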

Best regards,
Kairat Kushaev




Re: [openstack-dev] [glance] Glance Core team additions/removals

2016-02-02 Thread Kairat Kushaev
Thank you! I will try to do my best contributing to Glance.

Best regards,
Kairat Kushaev

On Tue, Feb 2, 2016 at 8:11 PM, Flavio Percoco <fla...@redhat.com> wrote:

> On 26/01/16 10:11 -0430, Flavio Percoco wrote:
>
>>
>> Greetings,
>>
>> I'd like us to have one more core cleanup for this cycle:
>>
>> Additions:
>>
>> - Kairat Kushaev
>> - Brian Rosmaita
>>
>> Both have done amazing reviews either on specs or code and I think they
>> both
>> would be an awesome addition to the Glance team.
>>
>> Removals:
>>
>> - Alexander Tivelkov
>> - Fei Long Wang
>>
>> Fei Long and Alexander are both part of the OpenStack community. However,
>> their
>> focus and time has shifted from Glance and, as it stands right now, it
>> would
>> make sense to have them both removed from the core team. This is not
>> related to
>> their reviews per-se but just prioritization. I'd like to thank both,
>> Alexander
>> and Fei Long, for their amazing contributions to the team. If you guys
>> want to
>> come back to Glance, please, do ask. I'm sure the team will be happy to
>> have you
>> on board again.
>>
>> To all other members of the community. Please, provide your feedback.
>> Unless
>> someone objects, the above will be effective next Tuesday.
>>
>
>
> The following steps were taken:
>
> - Kairat and Brian have been added. Welcome and thanks for joining
> - Fei Long was kept as core. Thanks a lot for weighing in and catching up
> with Glance.
> - Alexander has been removed. I can't stress enough how sad it is to see
> Alex go. I hope Alex will be able to rejoin in the not-so-distant future.
>
>
> Cheers,
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>


Re: [openstack-dev] [Glance] [Keystone][CI]

2016-01-11 Thread Kairat Kushaev
Hi bharath,
I think it is better to file a bug about this so the issue can be discussed
there; I am not sure the OpenStack Dev Mailing List is the appropriate
place to discuss bugs.

BTW, your problem looks similar to
https://bugs.launchpad.net/devstack/+bug/1515352. Could you please try the
workaround described in that bug?
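
For what it's worth, that error usually means the client's auth plugin could
not find a matching endpoint in the service catalog. A toy, stdlib-only
sketch of the lookup (not keystoneauth's actual code):

```python
# Toy, stdlib-only sketch of service-catalog endpoint resolution. Real
# clients (keystoneauth1) apply more filters (region, interface, version),
# but the failure mode is the same: no matching entry, no URL.

class EndpointNotFound(Exception):
    """Raised when no catalog entry matches the requested service."""

def get_endpoint(catalog, service_type, interface="public"):
    for entry in catalog:
        if entry["type"] != service_type:
            continue
        for ep in entry["endpoints"]:
            if ep["interface"] == interface:
                return ep["url"]
    raise EndpointNotFound(
        "Could not determine a suitable URL for the plugin")

catalog = [
    {"type": "identity",
     "endpoints": [{"interface": "public",
                    "url": "http://10.0.0.8:5000/v2.0"}]},
]

print(get_endpoint(catalog, "identity"))  # http://10.0.0.8:5000/v2.0
```

So checking what the catalog actually contains (e.g. `openstack catalog
list`) is usually the fastest diagnostic after a failed run.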

Best regards,
Kairat Kushaev

On Mon, Jan 11, 2016 at 9:50 AM, bharath <bhar...@brocade.com> wrote:

> Hi,
>
>
> I am facing a "Could not determine a suitable URL for the plugin" error
> during stacking.
> The issue is seen only after repeated stack/unstack cycles on a setup.
> It is fine for 10 to 15 CI runs, then it starts throwing the error below.
> I am able to reproduce it consistently.
>
> 2016-01-11 06:28:06.724 | + '[' bare = bare ']'
> 2016-01-11 06:28:06.724 | + '[' '' = zcat ']'
> 2016-01-11 06:28:06.724 | + openstack --os-cloud=devstack-admin image
> create cirros-0.3.2-x86_64-disk --public --container-format=bare
> --disk-format qcow2
> 2016-01-11 06:28:07.308 | Could not determine a suitable URL for the plugin
> 2016-01-11 06:28:07.330 | + exit_trap
> 2016-01-11 06:28:07.330 | + local r=1
> 2016-01-11 06:28:07.331 | ++ jobs -p
> 2016-01-11 06:28:07.331 | + jobs=
>
>
> Thanks,
> bharath
>


[openstack-dev] [glance] Seeking FFE for "Prevention of Unauthorized errors in Swift driver"

2016-01-11 Thread Kairat Kushaev
Hello Glance Team!
I would like to request an FFE for the following spec:
https://review.openstack.org/#/c/248681/.

The change means users do not have to worry about token expiration when
uploading/downloading big images to/from the Swift backend, which is pretty
useful IMO. The change is not visible to end users and only affects the
case where the token expires during an image upload/download, which makes
the change less risky.
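
A minimal sketch of the idea behind the spec, assuming a hypothetical
`upload_chunk` call and a `refresh_token` helper (neither is the real
glance_store Swift driver API):

```python
# Hypothetical sketch: retry a chunk upload with a freshly fetched token
# when the backend rejects the request as unauthorized (e.g. the token
# expired mid-transfer). Names are made up for illustration.

class Unauthorized(Exception):
    """Stand-in for the 401 error a real Swift client would raise."""

def upload_with_refresh(upload_chunk, refresh_token, chunks):
    token = refresh_token()
    for chunk in chunks:
        try:
            upload_chunk(chunk, token)
        except Unauthorized:
            # Token expired mid-upload: fetch a new one and retry once.
            token = refresh_token()
            upload_chunk(chunk, token)

# Tiny demo: the token "t1" expires before chunk "b" is uploaded.
tokens = iter(["t1", "t2"])

def refresh_token():
    return next(tokens)

sent = []

def upload_chunk(chunk, token):
    if chunk == "b" and token == "t1":
        raise Unauthorized()
    sent.append((chunk, token))

upload_with_refresh(upload_chunk, refresh_token, ["a", "b", "c"])
print(sent)  # [('a', 't1'), ('b', 't2'), ('c', 't2')]
```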

Best regards,
Kairat Kushaev


Re: [openstack-dev] glance details

2015-12-24 Thread Kairat Kushaev
Hello,
Do you have a "checksum" field in the output of image-show or image-list?

P.S. It is usually better to use ask.openstack.org or the IRC channel
#openstack-glance for such questions.


Re: [openstack-dev] [glance] Auth_version from 'old style' URLs in the database

2015-12-04 Thread Kairat Kushaev
Hi,
there is another potential risk in using URLs from the database: once
keystone v2 goes down, there is no way to request an image using database
URLs like 10.0.0.8:5000/v2.0.
The only possible option is to update the DB entries in glance, but I am
not sure that is the correct solution.

> I am wondering what people would prefer to do, to support the 'old style'
> urls
> and therefore parse the version from the url. Or to make the auth_version
> common and potentially break the 'old style' database entries.
>

Is there a way to prevent this? Can't we ignore the auth_url from such
entries and use the auth_address from the glance_store configuration
instead?
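
To make the "parse the version from the url" option concrete, a rough
illustration (hypothetical helper, not the code in the proposed patch):

```python
# Hypothetical helper: pull a 'v2.0' / 'v3' style version out of an
# 'old style' auth URL, falling back to the configured auth_version
# for unversioned URLs.
from urllib.parse import urlparse

def auth_version_from_url(url, default=None):
    path = urlparse(url).path
    for segment in path.strip("/").split("/"):
        if segment.startswith("v") and segment[1:2].isdigit():
            return segment
    return default

print(auth_version_from_url("http://10.0.0.8:5000/v2.0"))   # v2.0
print(auth_version_from_url("http://10.0.0.8:5000", "v3"))  # v3
```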

Best regards,
Kairat Kushaev

On Thu, Dec 3, 2015 at 7:24 PM, Bunting, Niall <niall.bunt...@hpe.com>
wrote:

> Hi,
>
> Currently glance will use an auth_url if in the database. Eg.
> 10.0.0.8:5000/v2.0
>
> However glance currently takes the auth_version from the config
> files. Therefore this can lead to a mismatch of keystone version to be used
> between the url and the config files. This is problematic due to a
> different
> resource id being required in different version of keystone (in keystone v2
> it was /v2.0/tokens in keystone v3 it is /v3/auth/tokens).
>
> Using a v2 url and config file with keystone v3:
> 10.0.0.8:5000/v2.0/auth/tokens -- Fails to authenticate the user,
> and user can't download image.
>
> See https://bugs.launchpad.net/glance-store/+bug/1507610 for a bug report
> on this.
>
> This means that the fix proposed by
> https://review.openstack.org/#/c/238074/ parses the URL for an
> auth_version
> and then if found will use the parsed value as the auth_version rather than
> the one from the config files. Taking the url as the true source.
> Therefore the image will still work as the auth_version used by glance is
> the
> one defined in the URL meaning the correct resource id appended.
>
> Whilst discussing it with Kairat it was proposed that we ignore the
> keystone version in the URL and if it does not support the auth_version
> in the configs, then the image would fail to be downloaded. This is due to
> a
> preference to have a centralised auth_version value.
>
> I am wondering what people would prefer to do, to support the 'old style'
> urls
> and therefore parse the version from the url. Or to make the auth_version
> common and potentially break the 'old style' database entries.
>
> Thanks,
> Niall Bunting
>
>


Re: [openstack-dev] [glance] Models and validation for v2

2015-10-01 Thread Kairat Kushaev
Yep, the way we removed the validation is not a good long-term solution
(IMO), because we are still requesting the schema for the unvalidated
model, and I am not sure why we need it.
I will create a spec about this soon so we can discuss it in more detail.

Best regards,
Kairat Kushaev

On Thu, Oct 1, 2015 at 2:44 PM, <stuart.mcla...@hp.com> wrote:

>
> We've been taking validation out as issues have been reported (it was
> removed from image-list recently for example).
>
> Removing across the board probably does make sense.
>
>
>> Agree with you. That's why I am asking about reasoning. Perhaps, we need
>> to
>> realize how to get rid of this in glanceclient.
>>
>> Best regards,
>> Kairat Kushaev
>>
>> On Wed, Sep 30, 2015 at 7:04 PM, Jay Pipes <jaypi...@gmail.com> wrote:
>>
>> On 09/30/2015 09:31 AM, Kairat Kushaev wrote:
>>>
>>> Hi All,
>>>> In short terms, I am wondering why we are validating responses from
>>>> server when we are doing
>>>> image-show, image-list, member-list, metadef-namespace-show and other
>>>> read-only requests.
>>>>
>>>> AFAIK, we are building warlock models when receiving responses from
>>>> server (see [0]). Each model requires schema to be fetched from glance
>>>> server. It means that each time we are doing image-show, image-list,
>>>> image-create, member-list and others we are requesting schema from the
>>>> server. AFAIU, we are using models to dynamically validate that object
>>>> is in accordance with schema but is it the case when glance receives
>>>> responses from the server?
>>>>
>>>> Could somebody please explain me the reasoning of this implementation?
>>>> Am I missed some usage cases when validation is required for server
>>>> responses?
>>>>
>>>> I also noticed that we already faced some issues with such
>>>> implementation that leads to "mocking" validation([1][2]).
>>>>
>>>>
>>> The validation should not be done for responses, only ever requests (and
>>> it's unclear that there is value in doing this on the client side at all,
>>> IMHO).
>>>
>>> -jay
>>>
>>>
>>>
>>
>


Re: [openstack-dev] [glance] Models and validation for v2

2015-09-30 Thread Kairat Kushaev
Agree with you. That's why I am asking about the reasoning. Perhaps we need
to figure out how to get rid of this in glanceclient.

Best regards,
Kairat Kushaev

On Wed, Sep 30, 2015 at 7:04 PM, Jay Pipes <jaypi...@gmail.com> wrote:

> On 09/30/2015 09:31 AM, Kairat Kushaev wrote:
>
>> Hi All,
>> In short terms, I am wondering why we are validating responses from
>> server when we are doing
>> image-show, image-list, member-list, metadef-namespace-show and other
>> read-only requests.
>>
>> AFAIK, we are building warlock models when receiving responses from
>> server (see [0]). Each model requires schema to be fetched from glance
>> server. It means that each time we are doing image-show, image-list,
>> image-create, member-list and others we are requesting schema from the
>> server. AFAIU, we are using models to dynamically validate that object
>> is in accordance with schema but is it the case when glance receives
>> responses from the server?
>>
>> Could somebody please explain me the reasoning of this implementation?
>> Am I missed some usage cases when validation is required for server
>> responses?
>>
>> I also noticed that we already faced some issues with such
>> implementation that leads to "mocking" validation([1][2]).
>>
>
> The validation should not be done for responses, only ever requests (and
> it's unclear that there is value in doing this on the client side at all,
> IMHO).
>
> -jay
>


[openstack-dev] [glance] Models and validation for v2

2015-09-30 Thread Kairat Kushaev
Hi All,
In short, I am wondering why we validate responses from the server when
doing image-show, image-list, member-list, metadef-namespace-show and other
read-only requests.

AFAIK, we build warlock models when receiving responses from the server
(see [0]). Each model requires the schema to be fetched from the glance
server. It means that every time we do image-show, image-list,
image-create, member-list and the others, we request the schema from the
server. AFAIU, we use the models to dynamically validate that an object
conforms to the schema, but is that the case when the client receives
responses from the server?

Could somebody please explain the reasoning behind this implementation?
Have I missed some use cases where validation is required for server
responses?

I also noticed that we have already faced issues with this implementation
that lead to "mocking" the validation ([1][2]).


[0]:
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L185
[1]:
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L47
[2]: https://bugs.launchpad.net/python-glanceclient/+bug/1501046
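
To illustrate the overhead being questioned: building a schema-validated
model for every response adds a validation pass (plus, in glanceclient, a
schema fetch) per call. A toy, stdlib-only sketch of the difference
(warlock itself works differently):

```python
# Toy illustration of the cost: validating every server response against
# a schema vs. using the response as-is. This is stdlib-only and NOT how
# warlock/glanceclient actually implement models.

def validate(obj, schema):
    """Minimal type check against a {'properties': {name: type}} schema."""
    for key, expected in schema["properties"].items():
        if key in obj and not isinstance(obj[key], expected):
            raise ValueError("%s is not a %s" % (key, expected.__name__))
    return obj

image_schema = {"properties": {"name": str, "size": int}}

response = {"name": "cirros-0.3.2-x86_64-disk", "size": 13167616}

# Validated path: extra work (and a schema fetch) for every response.
validated = validate(response, image_schema)

# Read-only path: just use the server's response directly.
unvalidated = dict(response)
```

For read-only calls the server already produced the data, so the
client-side check mostly costs an extra round trip for the schema.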

Best regards,
Kairat Kushaev


Re: [openstack-dev] [Heat] OS::Neutron::Port fails to set security group by name, no way to retrieve group ID from Neutron::SecurityGroup

2015-08-07 Thread Kairat Kushaev
Hello Jason,
Agree with TianTian. It would be good if you could provide more details
about the error you are seeing.
Additionally, it would be best to use the Heat IRC channel (#heat) or
ask.openstack.org for this kind of question.
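
For reference, the pattern TianTian describes below looks roughly like this
(hypothetical minimal HOT fragment, untested):

```yaml
# Hypothetical HOT fragment: pass the security group to the port via
# get_resource, which resolves to the group's ID.
resources:
  web_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 80
          port_range_max: 80

  web_port:
    type: OS::Neutron::Port
    properties:
      network: private
      security_groups:
        - { get_resource: web_secgroup }
```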

Best regards,
Kairat Kushaev
Software Engineer, Mirantis

On Fri, Aug 7, 2015 at 9:43 AM, TIANTIAN tiantian...@163.com wrote:

 1) OS::Neutron::Port does not seem to recognize security groups by name
 --
 https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/port.py#L303

 https://github.com/openstack/heat/blob/stable/kilo/heat/engine/clients/os/neutron.py#L111
 we can recognize group name
 2) OS::Neutron::SecurityGroup has no attributes so it can not return a
 security group ID
 --
 https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/neutron.py#L133,
 we can get the resource id (security group id) by function
 'get_resource'
 So what do you want? And what are the problems?


 At 2015-08-07 11:10:37, jason witkowski jwit...@gmail.com wrote:

 Hey All,

 I am having issues on the Kilo branch creating an auto-scaling template
 that builds a security group and then adds instances to it.  I have tried
 every various method I could think of with no success.  My issues are as
 such:

 1) OS::Neutron::Port does not seem to recognize security groups by name
 2) OS::Neutron::SecurityGroup has no attributes so it can not return a
 security group ID

 These issues combined find me struggling to automate the building of a
 security group and instances in one heat stack.  I have read and looked at
 every example online and they all seem to use either the name of the
 security group or the get_resource function to return the security group
 object itself.  Neither of these work for me.

 Here are my heat template files:

 autoscaling.yaml - http://paste.openstack.org/show/412143/
 redirector.yaml - http://paste.openstack.org/show/412144/
 env.yaml - http://paste.openstack.org/show/412145/

 Heat Client: 0.4.1
 Heat-Manage: 2015.1.1

 Any help would be greatly appreciated.

 Best Regards,

 Jason






[openstack-dev] [heat] Repeating stack-delete many times

2015-02-10 Thread Kairat Kushaev
Hi all,
While analyzing the following bug:
https://bugs.launchpad.net/heat/+bug/1418878
I figured out that the orchestration engine doesn't work properly in some
cases.
The case is the following:
deleting the same stack (with resources) n times in series.
This can happen when stack deletion takes a long time and a user sends a
second delete request.
The orchestration engine behaves as follows:
1) When the first stack-delete command comes to the heat service,
it acquires the stack lock and sends delete requests for the resources
to the other clients.
Unfortunately, the command has not yet started deleting resources from the
heat DB.
2) At that moment a second stack-delete command for the same stack comes to
the heat engine. It steals the stack lock and waits 0.2 sec (a hard-coded
constant!) to allow the previous stack-delete command to finish its
operations (of course, the first didn't manage to finish deleting in time).
After that, the engine service starts the deletion again:
 - Request resources from the heat DB (they exist!)
 - Send delete requests to the other clients (the resources do not exist,
because of point 1).
Finally, we end up with the stack in DELETE_FAILED state because the
clients raise exceptions during the stack delete.
I have some proposals for how to fix it:
p1) Make the waiting time (0.2 sec) configurable. This allows stack-delete
ops to finish before the second command starts deleting. From my point of
view it is just a workaround, because different stacks (and operations)
take different amounts of time.
p2) Try to deny lock stealing while the current thread is executing a
delete. As an option, we could wait for the other thread if the stack is
deleting, but it seems that is not possible with the current solution.
p3) Just leave it as it is. IMO, the last resort.
Do you have any other proposals for handling such cases?
Perhaps a more proper solution exists.
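
To make p2 concrete, a toy sketch (hypothetical names; Heat's real stack
lock is oslo/DB based):

```python
import threading

class StackLock:
    """Toy lock illustrating p2: refuse stealing while a delete runs."""

    def __init__(self):
        self._lock = threading.Lock()
        self.action = None

    def acquire(self, action):
        if not self._lock.acquire(blocking=False):
            if self.action == "DELETE":
                # p2: deny stealing the lock from an in-progress delete.
                raise RuntimeError("stack is already being deleted")
            # Otherwise the engine would steal the lock here (toy: no-op).
        self.action = action

    def release(self):
        self.action = None
        self._lock.release()

lock = StackLock()
lock.acquire("DELETE")      # first stack-delete request
try:
    lock.acquire("DELETE")  # second request is rejected instead of racing
except RuntimeError as exc:
    print(exc)              # stack is already being deleted
```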

Thank You,
Kairat Kushaev


Re: [openstack-dev] [heat] Repeating stack-delete many times

2015-02-10 Thread Kairat Kushaev
Sorry for the flood,
I forgot p4:
Prohibit stack deletion if the current stack (state, status) is (DELETE,
IN_PROGRESS).
Raise a NotSupported exception in the heat engine. This is possible because
the stack state is updated before deleting.

On Tue, Feb 10, 2015 at 2:04 PM, Kairat Kushaev kkush...@mirantis.com
wrote:

 Hi all,
 During the analysis of the following bug:
 https://bugs.launchpad.net/heat/+bug/1418878
 i figured out that orchestration engine doesn't work properly in some
 cases.
 The case is the following:
 trying to delete the same stack with resources n times in series.
 It might happen if the stack deleting takes much time and a user is sending
 the second delete request again.
 Orchestration engine behavior is the following:
 1) When first stack-delete command comes to heat service
 it acquires the stack lock and sends delete request for resources
 to other clients.
 Unfortunately, the command does not start to delete resources from heat
 db.
 2) At that time second stack-delete command for the same stack
 comes to heat engine. It steals the stack lock, waits 0.2 (hard-coded
 constant!)
 sec to allow previous stack-delete command finish the operations (of
 course,
 the first didn't manage to finish deleting on time). After that engine
 service starts
 the deleting again:
  - Request resources from heat DB (They exist!)
  - Send requests for delete to other clients (They do not exist
 because of
 point 1).
 Finally, we have stack in DELETE_FAILED state because the clients raise
 exceptions during stack delete.
 I have some proposals how to fix it:
 p1) Make waiting time (0.2 sec) configurable. It allows to finish
 stack-delete ops
 before the second command starts deleting. From my point of view, it is
 just
 workaround because different stacks (and operations) took different time.
 p2) Try to deny lock stealing if the current thread executes deleting. As
 an option,
 we can wait for the other thread if stack is deleting but it seems that it
 is not possible
 to analyze with the current solution.
 p3) Just leave it as it is. IMO, the last solution.
 Do you have any other proposals how to manage such kind of cases?
 Perhaps there is exists more proper solution.

 Thank You,
 Kairat Kushaev





Re: [openstack-dev] [heat] Repeating stack-delete many times

2015-02-10 Thread Kairat Kushaev
Thanks for the explanation, Steven.
I will try to figure out why it is not working in nova.
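
If I understand the suggestion below correctly, the fix amounts to widening
the set of ignored exceptions for this resource, roughly like this
(exception classes here are stand-ins, not the real novaclient types):

```python
# Sketch of the suggested fix: treat "already gone" style errors as a
# successful delete. NotFound/Conflict are stand-ins for illustration.

class NotFound(Exception):
    pass

class Conflict(Exception):
    pass

def handle_delete(delete_association, ignore=(NotFound, Conflict)):
    """Run the delete, swallowing errors that mean 'nothing to delete'."""
    try:
        delete_association()
    except ignore:
        # Re-deleting a vanished association: nothing left to do.
        pass

def already_deleted():
    raise Conflict("association does not exist")

handle_delete(already_deleted)  # swallowed; the stack delete can proceed
```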

On Tue, Feb 10, 2015 at 4:04 PM, Steven Hardy sha...@redhat.com wrote:

 On Tue, Feb 10, 2015 at 03:04:39PM +0400, Kairat Kushaev wrote:
 Hi all,
 During the analysis of the following bug:
 https://bugs.launchpad.net/heat/+bug/1418878
 i figured out that orchestration engine doesn't work properly in some
 cases.
 The case is the following:
 trying to delete the same stack with resources n times in series.
 It might happen if the stack deleting takes much time and a user is
 sending
 the second delete request again.
 Orchestration engine behavior is the following:
 1) When first stack-delete command comes to heat service
 it acquires the stack lock and sends delete request for resources
 to other clients.
 Unfortunately, the command does not start to delete resources from
 heat
 db.
 2) At that time second stack-delete command for the same stack
 comes to heat engine. It steals the stack lock, waits 0.2 (hard-coded
 constant!)
 sec to allow previous stack-delete command finish the operations (of
 course,
 the first didn't manage to finish deleting on time). After that engine
 service starts
 the deleting again:
     - Request resources from heat DB (They exist!)
     - Send requests for delete to other clients (They do not exist
 because of
       point 1).

 This is expected, and the reason for the following error path in most
 resource handle_delete paths is to ignore any do not exist errors:

   self.client_plugin().ignore_not_found(e)

 Finally, we have stack in DELETE_FAILED state because the clients
 raise
 exceptions during stack delete.

 This is the bug, the exception which is raised isn't getting ignored by the
 nova client plugin, which by default only ignores NotFound exceptions:


 https://github.com/openstack/heat/blob/master/heat/engine/clients/os/nova.py#L85

 In this case, I think the problem is you're getting a Conflict exception
 when attempting to re-delete the NovaFloatingIpAssociation:


 https://github.com/openstack/heat/blob/master/heat/engine/resources/nova_floatingip.py#L148

 So, I think this is probably a bug specific to NovaFloatingIpAssociation
 rather than a problem we need to fix across all resources?

 I'd probably suggest we either add another except clause which catches (and
 ignores) this situation, or look at whether novaclient is raising the wrong
 exception type, as NotFound would appear to be a saner error than
 Conflict when trying to delete a non-existent association?

 Steve

