[openstack-dev] (no subject)

2018-08-13 Thread Amy Marrich
Hi everyone,

If you’re running OpenStack, please participate in the User Survey to share
more about the technology you are using and provide feedback for the
community by *August 21 - hurry, it’s next week!* By completing a deployment
survey, you will qualify as an AUC and receive a $300 USD ticket to the two
upcoming Summits.

Please help us spread the word, as we're trying to gather as much real-world
deployment data as possible to share back with both the operator and
developer communities.

We are only conducting one survey this year, and the report will be
published at the Berlin Summit. If you would like OpenStack user data in
the meantime, check out the analytics dashboard, which updates in real
time throughout the year.

The information provided is confidential and will only be presented in
aggregate unless you consent to make it public.

The deadline to complete the survey and be part of the next report is next
*Tuesday, August 21 at 23:59 UTC*.

   - You can log in and complete the OpenStack User Survey here:
   http://www.openstack.org/user-survey
   
   - If you’re interested in joining the OpenStack User Survey Working
   Group to help with the survey analysis, please complete this form:
   https://openstackfoundation.formstack.com/forms/user_survey_working_group
   

   - Help us promote the User Survey:
   https://twitter.com/OpenStack/status/993589356312088577
   


Please let me know if you have any questions.

Thanks,
Amy

Amy Marrich (spotz)
OpenStack User Committee
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2018-01-25 Thread Osaf Ali
osa...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2017-07-05 Thread Lawrence J. Albinson
Hi Andy,

Thank you. Yes, 15.1.6 seems good.

Kind regards, Lawrence

From: Andy McCrae
Sent: 04 July 2017 17:31
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] (no subject)

Hi Lawrence,

On 4 July 2017 at 12:29, Lawrence J. Albinson <lawre...@ljalbinson.com> wrote:
Dear Colleagues,

Before I go problem hunting, has anyone seen openstack-ansible builds failing 
at release 15.1.5 at the following point:

TASK [lxc_container_create : Create localhost config] **

with the error 'lxc-attach: command not found'?

The self-same environment is working fine at 15.1.3.

I've turned up nothing with searches. Any help greatly appreciated.

I know there were some issues created by changes to LXC and Ansible that look
related to what you're seeing. I believe those were resolved, though -
https://review.openstack.org/#/c/475438/
Looking at the patch that merged, it seems the fix will only be in the latest
release (which just got released!).
Try updating to 15.1.6 and see if that resolves it; if not, let us know.

Kind Regards,
Andy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2017-07-04 Thread Andy McCrae
Hi Lawrence,

On 4 July 2017 at 12:29, Lawrence J. Albinson 
wrote:

> Dear Colleagues,
>
> Before I go problem hunting, has anyone seen openstack-ansible builds
> failing at release 15.1.5 at the following point:
>
> TASK [lxc_container_create : Create localhost config]
> **
>
> with the error 'lxc-attach: command not found'?
>
> The self-same environment is working fine at 15.1.3.
>
> I've turned up nothing with searches. Any help greatly appreciated.
>

I know there were some issues created by changes to LXC and Ansible that look
related to what you're seeing. I believe those were resolved, though -
https://review.openstack.org/#/c/475438/
Looking at the patch that merged, it seems the fix will only be in the latest
release (which just got released!).
Try updating to 15.1.6 and see if that resolves it; if not, let us know.
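
A minimal sketch of that upgrade, assuming a standard /opt/openstack-ansible
checkout (paths and scripts may differ in your deployment):

  # Hedged sketch: move an existing openstack-ansible checkout to the
  # 15.1.6 tag and refresh the Ansible tooling before re-running playbooks.
  cd /opt/openstack-ansible
  git fetch --tags
  git checkout 15.1.6
  # Re-bootstrap so role and requirement pins match the new tag.
  ./scripts/bootstrap-ansible.sh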

Kind Regards,
Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2017-07-04 Thread Lawrence J. Albinson
Dear Colleagues,

Before I go problem hunting, has anyone seen openstack-ansible builds failing 
at release 15.1.5 at the following point:

TASK [lxc_container_create : Create localhost config] **

with the error 'lxc-attach: command not found'?

The self-same environment is working fine at 15.1.3.

I've turned up nothing with searches. Any help greatly appreciated.

Kind regards, Lawrence

Lawrence J Albinson

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2016-08-17 Thread Tom Fifield

On 17/08/16 15:19, UnitedStack 张德通 wrote:

i want to join openstack mailing list


You're on it ^_^


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2016-08-17 Thread UnitedStack 张德通
i want to join openstack mailing list
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2016-07-30 Thread Farhad Sunavala
Yes, this was intentionally done. The logical-source-port is important only
at the point of classification. All successive classifications rely only on
the 5-tuple and MPLS label (chain ID).

Consider an extension of the scenario you mention below.

Sources: (similar to your case)
  a, b
Port-pairs: (added ppe and ppf)
  ppc, ppd, ppe, ppf
Port-pair-groups: (added ppge and ppgf)
  ppgc, ppgd, ppge, ppgf
Flow-classifiers:
  fc1: logical-source-port of a && tcp
  fc2: logical-source-port of b && tcp
Port-chains:
  pc1: fc1 && (ppgc + ppge)
  pc2: fc2 && (ppgd + ppgc + ppgf)

The flow-classifier has logical-src-port and protocol=tcp. The
logical-src-port has no relevance in the middle of the chain.
In the middle of the chain, the only relevant flow-classifier is protocol=tcp.
If we allow it, we cannot distinguish TCP traffic coming out of ppgc (and
subsequently ppc) as to whether to mark it with the label for pc1 or the
label for pc2.
In other words, within a tenant the flow-classifiers need to be unique with
respect to the 5-tuples.
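
A hedged CLI sketch of the conflict, using the names above (flag spellings
follow the networking-sfc neutron client and are assumed, not verified):

  neutron flow-classifier-create --protocol tcp \
      --logical-source-port <port-of-a> fc1
  neutron flow-classifier-create --protocol tcp \
      --logical-source-port <port-of-b> fc2
  neutron port-chain-create --port-pair-group ppgc \
      --port-pair-group ppge --flow-classifier fc1 pc1
  # Expected to fail: mid-chain, fc2 reduces to the same 5-tuple
  # (protocol=tcp) as fc1, so the two chains could not be told apart.
  neutron port-chain-create --port-pair-group ppgd --port-pair-group ppgc \
      --port-pair-group ppgf --flow-classifier fc2 pc2
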
thanks,
Farhad.
Date: Fri, 29 Jul 2016 18:01:05 +0300
From: Artem Plakunov 
To: openst...@lists.openstack.org
Subject: [Openstack] [networking-sfc] Flow classifier conflict logic
Message-ID: <579b6fb1.3030...@lvk.cs.msu.su>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hello.
We have two deployments with networking-sfc:
mirantis 8.0 (liberty) and mirantis 9.0 (mitaka).

I noticed a difference in how flow classifiers conflict with each other 
which I do not understand. I'm not sure if it is a bug or not.

I did the following on mitaka:
1. Create tenant 1 and network 1
2. Launch vms A and B in network 1
3. Create tenant 2, share network 1 to it with RBAC policy, launch vm C 
in network 1
4. Create tenant 3, share network 1 to it with RBAC policy, launch vm D 
in network 1
5. Setup sfc:
    create two port pairs for vm C and vm D with a bidirectional port
    create two port pair groups with these pairs (one pair in one group)
    create flow classifier 1: logical-source-port = vm A port, protocol 
= tcp
    create flow classifier 2: logical-source-port = vm B port, protocol 
= tcp
    create chain with group 1 and classifier 1
    create chain with group 2 and classifier 2 - this step gives the 
following error:

Flow Classifier 7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow 
Classifier 4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain 
d1070955-fae9-4483-be9e-0e30f2859282.
Neutron server returns request_ids: 
['req-9d0eecec-2724-45e8-84b4-7ccf67168b03']

The only thing neutron logs have is this from server.log:
2016-07-29 14:15:57.889 18917 INFO neutron.api.v2.resource 
[req-9d0eecec-2724-45e8-84b4-7ccf67168b03 
0b807c8616614b84a4b16a318248d28c 9de9dcec18424398a75a518249707a61 - - -] 
create failed (client error): Flow Classifier 
7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow Classifier 
4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain 
d1070955-fae9-4483-be9e-0e30f2859282.

I tried the same in liberty and it works and sfc successfully routes 
traffic from both vms to their respective port groups

Liberty setup:
neutron version 7.0.4
neutronclient version 3.1.1
networking-sfc version 1.0.0 (from pip package)

Mitaka setup:
neutron version 8.1.1
neutronclient version 5.0.0 (tried using 3.1.1 with same outcome)
networking-sfc version 1.0.1.dev74 (from master branch commit 
6730b6810355761cf55f04a40cd645f065f15752)

I'll attach the output of commands neutron port-list, port-pair-list, 
port-pair-group-list, flow-classifier-list and port-chain-list.

Is this an intended flow classifier behavior? If so, why? The port 
chains and all their participants are different.
-- next part --
root@node-8:~# neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 0a75ef50-3d06-467b-8321-a0b9dc406a2b |      | fa:16:3e:e0:48:81 | {"subnet_id": "533598bc-0bfd-4e92-9133-33ffe5043d57", "ip_address": "172.20.2.168"}  |
| 0c88fc4a-83f7-4194-bb9c-1b5864795e18 |      | fa:16:3e:f3:e9:ea | {"subnet_id": "69838436-ff18-40c4-bc62-8811e4ef6c7c", "ip_address": "192.168.44.2"}  |
| 0f6bddbb-a5a6-459a-a9b3-d4ae0806e5a6 |      | fa:16:3e:f7:27:1f | {"subnet_id": "1e69d4a3-9696-49c0-a5b7-5de71d7db0b5", "ip_address": "10.0.40.3"}     |
| 1731aae5-cd3a-4373-b9b9-6bca775ea4c6 |      | fa:16:3e:d7:0f:87 | {"subnet_id": "69838436-ff18-40c4-bc62-8811e4ef6c7c", "ip_address": "192.168.44.6"}  |
| 1c15d87e-78dd-40b8-ba68-13f55017be01 |      | fa:16:3e:a8:fe:ca | {"subnet_id": "533598bc-0bfd-4e92-9133-33ffe5043d57", "ip_address": "172.20.2.130"}  |
| 1e707e4c-e75a-475a-b166-7d4e4

[openstack-dev] (no subject)

2016-06-27 Thread Sergii Golovatiuk
Hi,

I would like to nominate Maksim Malchuk to the Fuel-Library Core team. He’s
been doing a great job so far [0]. He’s the #2 reviewer and #2 contributor,
with 28 commits over the last 90 days [1][2].

Fuelers, please vote with +1/-1 for approval/objection. Voting will be open
until July 4th. This will go forward after voting is closed if there are
no objections.

Overall contribution:
[0] http://stackalytics.com/?user_id=mmalchuk
Fuel library contribution for last 90 days:
[1]  
http://stackalytics.com/report/contribution/fuel-library/90
http://stackalytics.com/report/users/mmalchuk
List of reviews:
[2]
https://review.openstack.org/#/q/reviewer:%22Maksim+Malchuk%22+status:merged,n,z
--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2016-03-28 Thread Alexey Shtokolov
Fuelers!

I'm glad to announce that all patches [0] were merged!

Many thanks to all of you who helped us make Fuel more flexible and
unlimited.

Special thanks to our code and design reviewers: Igor Kalnitsky, Sergii
Golovatiuk, Vitaly Kramskikh.

And especially to the Team: Bulat Gaifullin, Ilya Kutukov, Vladimir
Sharshov, Julia Aranovich, Stanislaw Bogatkin, Evgeniy L and Vladimir Kuklin.
My sincerest thanks and appreciation for your great efforts, sleepless
nights and sense of purpose!

WBR, Alexey Shtokolov

[0] - https://goo.gl/kSwej5

2016-03-25 21:48 GMT+03:00 Vladimir Kozhukalov :

> Granted. New deadline is 21:00 UTC 03/28/2016.
>
> Vladimir Kozhukalov
>
> On Fri, Mar 25, 2016 at 8:17 PM, Alexey Shtokolov wrote:
>> Fuelers!
>>
>>
>> We are very close to landing our feature "Unlock Settings Tab". But we
>> still have a set of reviews [0] to be merged due to several reasons (incl.
>> the migration to python27-db gates on OpenStack Infra). I would like to
>> request extra time till Monday to land them.
>>
>>
>> [0] - https://goo.gl/kSwej5
>>
>> 2016-03-14 11:00 GMT+03:00 Dmitry Borodaenko :
>>
>>> Thanks for working this out! Confirming that task history remains
>>> included in the scope of this FFE until the merge deadline, March 24.
>>>
>>> On Fri, Mar 11, 2016 at 11:48:51PM +0200, Igor Kalnitsky wrote:
>>> > Hey Dmitry,
>>> >
>>> > I confirm that we agreed on feature design, and you can proceed with
>>> > granting exception.
>>> >
>>> > - Igor
>>> >
>>> > On Fri, Mar 11, 2016 at 8:27 PM, Alexey Shtokolov
>>> >  wrote:
>>> > > Hi Dmitry,
>>> > >
>>> > > We've reached the design consensus with Igor today. Could you please
>>> remove
>>> > > the conditional status of the FFE request?
>>> > > As agreed: the merge deadline is March 24.
>>> > >
>>> > > --
>>> > > WBR, Alexey Shtokolov
>>> > >
>>> > > 2016-03-11 2:27 GMT+03:00 Dmitry Borodaenko <
>>> dborodae...@mirantis.com>:
>>> > >>
>>> > >> Granted. Design consensus deadline for the task history part of this
>>> > >> feature is extended to March 11. This does not change the merge
>>> deadline
>>> > >> for other parts of this feature, which is still March 24.
>>> > >>
>>> > >> --
>>> > >> Dmitry Borodaenko
>>> > >>
>>> > >>
>>> > >> On Fri, Mar 11, 2016 at 01:02:52AM +0300, Alexey Shtokolov wrote:
>>> > >> > Dmitry,
>>> > >> >
>>> > >> > We are really close to have the consensus, but we need one more
>>> meeting
>>> > >> > with Fuel-Python Component Lead Igor Kalnitsky to make the final
>>> > >> > decision.
>>> > >> > All patches [0] are on review. The meeting is scheduled for
>>> tomorrow
>>> > >> > (03/11
>>> > >> > 1:30pm CET).
>>> > >> > Could you please grant us one more day for it?
>>> > >> >
>>> > >> > [0] -
>>> > >> >
>>> https://review.openstack.org/#/q/topic:bp/store-deployment-tasks-history
>>> > >> >
>>> > >> > --
>>> > >> > WBR, Alexey Shtokolov
>>> > >> >
>>> > >> > 2016-03-04 3:13 GMT+03:00 Dmitry Borodaenko <
>>> dborodae...@mirantis.com>:
>>> > >> >
>>> > >> > > Granted, merge deadline March 24, task history part of the
>>> feature is
>>> > >> > > to
>>> > >> > > be excluded from this exception grant unless a consensus is
>>> reached by
>>> > >> > > March 10.
>>> > >> > >
>>> > >> > > Relevant part of the meeting log starts at:
>>> > >> > >
>>> > >> > >
>>> > >> > >
>>> http://eavesdrop.openstack.org/meetings/fuel/2016/fuel.2016-03-03-16.00.log.html#l-198
>>> > >> > >
>>> > >> > > --
>>> > >> > > Dmitry Borodaenko
>>> > >> > >
>>> > >> > >
>>> > >> > > On Wed, Mar 02, 2016 at 06:00:40PM +0700, Vitaly Kramskikh
>>> wrote:
>>> > >> > > > Oh, so there is a spec. I was worried that this patch has
>>> > >> > > > "WIP-no-bprint-assigned-yet" string in the commit message, so
>>> I
>>> > >> > > > thought
>>> > >> > > > there is no spec for it. So the commit message should be
>>> updated to
>>> > >> > > > avoid
>>> > >> > > > such confusion.
>>> > >> > > >
>>> > >> > > > It's really good I've seen this spec. There are plans to
>>> overhaul UI
>>> > >> > > > data
>>> > >> > > > format description which we use for cluster and node settings
>>> to
>>> > >> > > > solve
>>> > >> > > some
>>> > >> > > > issues and implement long-awaited features like nested
>>> structures,
>>> > >> > > > so we
>>> > >> > > > might also want to deprecate our expression language and also
>>> switch
>>> > >> > > > to
>>> > >> > > > YAQL (and thus port YAQL to JS).
>>> > >> > > >
>>> > >> > > > 2016-03-02 17:17 GMT+07:00 Vladimir Kuklin <
>>> vkuk...@mirantis.com>:
>>> > >> > > >
>>> > >> > > > > Vitaly
>>> > >> > > > >
>>> > >> > > > > Thanks for bringing this up. Actually the spec has been on
>>> review
>>> > >> > > > > for
>>> > >> > > > > almost 2 weeks: https://review.openstack.org/#/c/282695/.
>>> > >> > > > > Essentially,
>>> > >> > > > > this is not introducing new DSL but replacing the existing
>>> one
>>> > >> > > > > with
>>> > >> > > more
>>> > >> > > > > powerful extendable language which is being actively
>>> developed
>>> > >> > > > > within
>>> > 

[openstack-dev] (no subject)

2016-03-06 Thread 郝启臣
I have a blueprint for glance; can anybody give me some advice?

blueprint link:

https://blueprints.launchpad.net/glance-store/+spec/image-compress

When we make a qcow2 image and prepare to upload it to glance, the image is
usually large and compressible, so it is better to compress the image on the
client side and upload the compressed image to glance, which will save a lot
of storage space.
Compression wiki: https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files
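
A hedged sketch of the client-side step (file names are placeholders):

  # Recompress a qcow2 image before uploading; -c enables compression.
  qemu-img convert -O qcow2 -c original.qcow2 compressed.qcow2
  # Then upload the smaller image:
  glance image-create --name my-image --disk-format qcow2 \
      --container-format bare --file compressed.qcow2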

Also, the image store is not safe in glance: if someone can access the
directory, they can get the image file and steal the information in the
image, so we'd better encrypt the image (for example with libvirt,
https://libvirt.org/formatstorageencryption.html).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2016-02-22 Thread ghe . rivero
From: Ghe Rivero 
Subject: Re: [openstack-dev] [all] A proposal to separate the design summit

Quoting Clayton O'Neill (2016-02-22 10:27:04)
> Is the expectation that the ops mid-cycle would continue separately,
> or be held with the meeting formerly known as the Design Summit?
> 
> Personally I’d prefer they be held together, but scheduled with the
> thought that operators aren’t likely to be interested in work
> sessions, but that a good number of us would be interested in
> cross-project and some project-specific planning sessions.  This would
> also open up the possibility of having some sessions specifically
> intended for operator/developer feedback.

+1

Ghe Rivero

> On Mon, Feb 22, 2016 at 12:15 PM, Lauren Sell  wrote:
> >
> >> On Feb 22, 2016, at 8:52 AM, Clayton O'Neill  wrote:
> >>
> >> I think this is a great proposal, but like Matt I’m curious how it
> >> might impact the operator sessions that have been part of the Design
> >> Summit and the Operators Mid-Cycle.
> >>
> >> As an operator I got a lot out of the cross-project designs sessions
> >> in Tokyo, but they were scheduled at the same time as the Operator
> >> sessions.  On the other hand, the work sessions clearly aren’t as
> >> useful to me.  It would be nice would be worked out so that the new
> >> design summit replacement was in the same location, and scheduled so
> >> that the operator specific parts were overlapping the work sessions
> >> instead of the more big picture sessions.
> >
> > Great question. The current plan is to maintain the ops summit and 
> > mid-cycle activities.
> >
> > The new format would allow us to reduce overlap between ops summit and 
> > cross project sessions at the main event, both for the operators and 
> > developers who want to be involved in either activity.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2015-08-27 Thread Adrian Otto
Let's get these details into the QuickStart doc so anyone else hitting this can 
be clued in.

--
Adrian

On Aug 27, 2015, at 9:38 PM, Vikas Choudhary <choudharyvika...@gmail.com> wrote:


Hi Stanislaw,


I also faced a similar issue. The reason might be that the openstack heat
service is not reachable from inside the master instance.
Please check /var/log/cloud-init.log for any connectivity-related error
messages and, if found, try manually whichever command has failed, with the
correct URL.



If this is the issue, you need to set the correct HOST_IP in localrc.



-Vikas Choudhary


___
Hi Stanislaw,

Your host with Fedora should have a special config file, which will send a
signal to the WaitCondition.
For a good example, please take a look at this template:
 
https://github.com/openstack/heat-templates/blob/819a9a3fc9d6f449129c8cefa5e087569340109b/hot/native_waitcondition.yaml


Also the best place for such question I suppose will be
https://ask.openstack.org/en/questions/


Regards,
Sergey.

On 26 August 2015 at 09:23, Pitucha, Stanislaw Izaak
<stanislaw.pitucha at hp.com> wrote:

> Hi all,
>
> I'm trying to stand up magnum according to the quickstart instructions
> with devstack.
>
> There's one resource which times out and fails: master_wait_condition. The
> kube master (fedora) host seems to be created, I can login to it via ssh,
> other resources are created successfully.
>
>
>
> What can I do from here? How do I debug this? I tried to look for the
> wc_notify itself to try manually, but I can't even find that script.
>
>
>
> Best Regards,
>
> Stanisław Pitucha
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at 
> lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2015-08-27 Thread Vikas Choudhary
Hi Stanislaw,

I also faced a similar issue. The reason might be that the openstack heat
service is not reachable from inside the master instance.
Please check /var/log/cloud-init.log for any connectivity-related error
messages and, if found, try manually whichever command has failed, with
the correct URL.


If this is the issue, you need to set the correct HOST_IP in localrc.
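
A hedged sketch of that check (log path, user, and localrc location are
assumptions about a typical devstack/magnum setup):

  # From inside the kube master, look for failed Heat notifications:
  sudo grep -iE 'error|fail|refused|timed out' /var/log/cloud-init.log
  # If the Heat endpoint was unreachable, point DevStack at an address
  # the instances can reach, then redeploy:
  echo 'HOST_IP=<reachable-host-ip>' >> ~/devstack/localrc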


-Vikas Choudhary



___
Hi Stanislaw,

Your host with Fedora should have special config file, which will send
signal to WaitCondition.
For good example please take a look this template
 
https://github.com/openstack/heat-templates/blob/819a9a3fc9d6f449129c8cefa5e087569340109b/hot/native_waitcondition.yaml


Also, the best place for such a question, I suppose, will be
https://ask.openstack.org/en/questions/


Regards,
Sergey.

On 26 August 2015 at 09:23, Pitucha, Stanislaw Izaak wrote:

> Hi all,
>
> I’m trying to stand up magnum according to the quickstart instructions
> with devstack.
>
> There’s one resource which times out and fails: master_wait_condition. The
> kube master (fedora) host seems to be created, I can login to it via ssh,
> other resources are created successfully.
>
> What can I do from here? How do I debug this? I tried to look for the
> wc_notify itself to try manually, but I can’t even find that script.
>
> Best Regards,
> Stanisław Pitucha
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2015-08-04 Thread Mike Kolesnik
On Tue, Aug 4, 2015 at 1:02 PM, Ihar Hrachyshka  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hi all,
>
> in feature/qos, we use ml2 extension drivers to handle additional
> qos_policy_id field that can be provided thru API:
>
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2
> /extensions/qos.py?h=feature/qos
>
> What we do in qos extension is we create a database 'binding' object
> between the updated port and the QoS policy that corresponds to
> qos_policy_id. So we access the database. It means there may be some
> complications there, f.e. the policy object is not available for the
> tenant, or just does not exist. In that case, we raise an exception
> from the extension, assuming that ml2 will propagate it to the user in
> some form.
>

First of all, maybe we should be asking this on the u/s mailing list to get
a broader view?


>
> But it does not work. This is because _call_on_ext_drivers swallows
> exceptions:
>
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2
> /managers.py#n766
>
> It makes me ask some questions:
>
> - - first, do we use extensions as was expected? Can we extend
> extensions to cover our use case?
>

I think we are; they mostly fit the case, but as with everything in Neutron
it's unripe.
However, from my experience this was the ripest option available to us.


>
> - - second, what would be the right way to go assuming we want to
> support the case? Should we just reraise? Or maybe postpone till all
> extension drivers are called, and then propagate an exception top into
> the stack? (Probably some extension manager specific exception?) Or
> maybe we want extensions to claim whether they may raise, and handle
> them accordingly?
>

I was thinking that, in order not to alter existing extension behaviours, we
could define in the ML2 extension driver scope a special exception type (a
sort of exception container), and if an exception of this type is raised,
then we should re-raise it.
I'm not sure there's much value in aggregating the exceptions right off the
bat; that can be done later on.



>
> - - alternatively, if we abuse the API and should stop doing it, which
> other options do we have to achieve similar behaviour without relying
> on ml2 extensions AND without polluting the ml2 driver with qos-specific
> code?
>
> Thanks for your answers,
> Ihar
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJVwI29AAoJEC5aWaUY1u57yLYH/jhYmu4aR+ewZwSzDYXMcfdz
> tD5BSYKD/YmDMIAYprmVCqOlk1jaioesFPMUOrsycpacZZWjg5tDSrpJ2Iz5/ZPw
> BYLIPGaYF3Pu87LHrUKhIz4f2TfSWve/7GBCZ6AK6zVqCXky8A9MRfWrf774a8oF
> kexP7qQVbyrOcXxZANDa1bJuLDsb4TiTcuuDizPtuUWlMfzmtZeauyieji/g1smq
> HBO5h7zUFQ87YvBqq7ed2KhlRENxo26aSrpxTFkyyxJU9xH1J8q9W1gWO7Tw1uCV
> psaijDmlxU/KySR97Ro8m5teu+7Pcb2cg/s57WaHWuAvPNW1CmfYc/XDn2I9KlI=
> =Fo++
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Mike
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2015-06-28 Thread Fox, Kevin M
App catalog needs ubiquity. Needs to be simple for ops to install.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2015-02-26 Thread Yuji Azama

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2014-11-06 Thread Sukhdev Kapur
Folks,

After Maruti's lightning talk on L2 Gateway, a bunch of people/vendors
expressed interest in coming up with an API for this service. The goal is
to come up with a basic set of APIs which can be implemented in the Kilo
time frame and built upon over time in the future.
Armando, Akihiro, and others present in this small discussion decided to
get together tomorrow morning (Friday) in the Pods area (outside Degas
Room) at 9:30am.

If anybody has any interest in this discussion or can add value to this
discussion, this will be a great opportunity to stop by.

Thanks
Sukhdev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2014-08-16 Thread qiwen tan
 [Cinder] 3rd party CI systems: Not Whitelisted Volume
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2014-07-07 Thread Sumit Gaur
http://bloggsatt.se/wp-admin/css/afternews.php
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2014-05-10 Thread Shyam Prasad N

Hi Clay,
First of all, thanks for the reply.

1. How can I update the eventlet version? I installed swift from source
(git). Will pulling the latest code help?
2. Yes. Recently my clients changed to chunked encoding for transfers.
Are you saying chunked encoding is not supported by swift?

3. Yes, the 408s have made it to the clients from proxy servers.

Regards,
Shyam

On Friday 09 May 2014 10:45 PM, Clay Gerrard wrote:
I thought those tracebacks only showed up with old versions of
eventlet or with eventlet_debug = true?


In my experience that normally indicates a client disconnect on a
chunked encoding transfer request (a request w/o a content-length).  Do
you know if your clients are using transfer encoding chunked?


Are you seeing the 408 make its way out to the client?  It wasn't 
clear to me if you only see these tracebacks on the object-servers or 
in the proxy logs as well?  Perhaps only one of the three disks 
involved in the PUT are timing out and the client still gets a 
successful response?


As the disks fill up, replication and auditing are going to consume more
disk resources - you may have to tune the concurrency and rate
settings on those daemons.  If the errors happen consistently you 
could try running with background consistency processes temporarily 
disabled and rule out if they're causing disk contention on your setup 
with your config.


-Clay


On Fri, May 9, 2014 at 8:54 AM, Ben Nemec wrote:


This is a development list, and your question sounds more
usage-related.  Please ask your question on the users list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Thanks.

-Ben


On 05/09/2014 06:57 AM, Shyam Prasad N wrote:

Hi,

I have a two node swift cluster receiving continuous traffic
(mostly
overwrites for existing objects) of 1GB files each.

Soon after the traffic started, I'm seeing the following
traceback from
some transactions...
Traceback (most recent call last):
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692, in PUT
     chunk = next(data_source)
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559, in <lambda>
     data_source = iter(lambda: reader(self.app.client_chunk_size), '')
   File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
     chunk = self.wsgi_input.read(*args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147, in read
     return self._chunked_read(self.rfile, length)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137, in _chunked_read
     self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
ValueError: invalid literal for int() with base 16: '' (txn:
tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)

Seeing the following errors on storage logs...
object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +0000] "PUT
/xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data"
408 - "PUT
http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data"
"txf3b4e5f677004474bbd2f-00536c30d1" "proxy-server 12241" 95.6405 "-"

It succeeds sometimes, but mostly 408 errors. I don't see any other
logs for the transaction ID, or around these 408 errors, in the log
files. Is this a disk timeout issue? These are only 1GB files and normal
writes to files on these disks are quite fast.

The timeouts from the swift proxy files are...
root@bulkstore-112:~# grep -R timeout /etc/swift/*
/etc/swift/proxy-server.conf:client_timeout = 600
/etc/swift/proxy-server.conf:node_timeout = 600
/etc/swift/proxy-server.conf:recoverable_node_timeout = 600

Can someone help me troubleshoot this issue?

--
-Shyam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin

Re: [openstack-dev] (no subject)

2014-05-09 Thread Clay Gerrard
I thought those tracebacks only showed up with old versions of eventlet or
with eventlet_debug = true?

In my experience that normally indicates a client disconnect on a chunked
encoding transfer request (a request w/o a content-length).  Do you know if
your clients are using transfer encoding chunked?

Are you seeing the 408 make its way out to the client?  It wasn't clear to
me if you only see these tracebacks on the object-servers or in the proxy
logs as well?  Perhaps only one of the three disks involved in the PUT are
timing out and the client still gets a successful response?

As the disks fill up, replication and auditing are going to consume more disk
resources - you may have to tune the concurrency and rate settings on those
daemons.  If the errors happen consistently you could try running with
background consistency processes temporarily disabled and rule out if
they're causing disk contention on your setup with your config.
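
A hedged way to test the chunked-transfer theory from the client side
(token, container, and file names are placeholders):

  # PUT with Transfer-Encoding: chunked, i.e. no Content-Length -
  # the request type that produces this traceback on early disconnect.
  curl -v -X PUT -H "X-Auth-Token: $TOKEN" \
       -H "Transfer-Encoding: chunked" \
       --upload-file 1gb-test.img \
       http://10.3.0.102:8080/v1/AUTH_test/8kpc/chunked-test.data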

-Clay


On Fri, May 9, 2014 at 8:54 AM, Ben Nemec  wrote:

> This is a development list, and your question sounds more usage-related.
>  Please ask your question on the users list: http://lists.openstack.org/
> cgi-bin/mailman/listinfo/openstack
>
> Thanks.
>
> -Ben
>
>
> On 05/09/2014 06:57 AM, Shyam Prasad N wrote:
>
>> Hi,
>>
>> I have a two node swift cluster receiving continuous traffic (mostly
>> overwrites for existing objects) of 1GB files each.
>>
>> Soon after the traffic started, I'm seeing the following traceback from
>> some transactions...
>> Traceback (most recent call last):
>>File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692,
>> in PUT
>>  chunk = next(data_source)
>>File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559,
>> in <lambda>
>>  data_source = iter(lambda: reader(self.app.client_chunk_size), '')
>>File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
>>  chunk = self.wsgi_input.read(*args, **kwargs)
>>File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147,
>> in read
>>  return self._chunked_read(self.rfile, length)
>>File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137,
>> in _chunked_read
>>  self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
>> ValueError: invalid literal for int() with base 16: '' (txn:
>> tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)
>>
>> Seeing the following errors on storage logs...
>> object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +0000] "PUT
>> /xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A
>> 30396AEF6B537B00.2.data"
>> 408 - "PUT
>> http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A
>> 30396AEF6B537B00.2.data"
>> "txf3b4e5f677004474bbd2f-00536c30d1" "proxy-server 12241" 95.6405 "-"
>>
>> It succeeds sometimes, but mostly 408 errors. I don't see any other
>> logs for the transaction ID. or around these 408 errors in the log
>> files. Is this a disk timeout issue? These are only 1GB files and normal
>> writes to files on these disks are quite fast.
>>
>> The timeouts from the swift proxy files are...
>> root@bulkstore-112:~# grep -R timeout /etc/swift/*
>> /etc/swift/proxy-server.conf:client_timeout = 600
>> /etc/swift/proxy-server.conf:node_timeout = 600
>> /etc/swift/proxy-server.conf:recoverable_node_timeout = 600
>>
>> Can someone help me troubleshoot this issue?
>>
>> --
>> -Shyam
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2014-05-09 Thread Ben Nemec
This is a development list, and your question sounds more usage-related. 
 Please ask your question on the users list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Thanks.

-Ben

On 05/09/2014 06:57 AM, Shyam Prasad N wrote:

Hi,

I have a two node swift cluster receiving continuous traffic (mostly
overwrites for existing objects) of 1GB files each.

Soon after the traffic started, I'm seeing the following traceback from
some transactions...
Traceback (most recent call last):
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692,
in PUT
 chunk = next(data_source)
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559,
in <lambda>
 data_source = iter(lambda: reader(self.app.client_chunk_size), '')
   File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
 chunk = self.wsgi_input.read(*args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147,
in read
 return self._chunked_read(self.rfile, length)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137,
in _chunked_read
 self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
ValueError: invalid literal for int() with base 16: '' (txn:
tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)

Seeing the following errors on storage logs...
object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +0000] "PUT
/xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data"
408 - "PUT
http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data"
"txf3b4e5f677004474bbd2f-00536c30d1" "proxy-server 12241" 95.6405 "-"

It succeeds sometimes, but mostly 408 errors. I don't see any other
logs for the transaction ID. or around these 408 errors in the log
files. Is this a disk timeout issue? These are only 1GB files and normal
writes to files on these disks are quite fast.

The timeouts from the swift proxy files are...
root@bulkstore-112:~# grep -R timeout /etc/swift/*
/etc/swift/proxy-server.conf:client_timeout = 600
/etc/swift/proxy-server.conf:node_timeout = 600
/etc/swift/proxy-server.conf:recoverable_node_timeout = 600

Can someone help me troubleshoot this issue?

--
-Shyam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2014-05-09 Thread Shyam Prasad N
Hi,

I have a two node swift cluster receiving continuous traffic (mostly
overwrites for existing objects) of 1GB files each.

Soon after the traffic started, I'm seeing the following traceback from
some transactions...
Traceback (most recent call last):
  File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692, in
PUT
chunk = next(data_source)
  File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559, in
<lambda>
data_source = iter(lambda: reader(self.app.client_chunk_size), '')
  File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
chunk = self.wsgi_input.read(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147, in
read
return self._chunked_read(self.rfile, length)
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137, in
_chunked_read
self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
ValueError: invalid literal for int() with base 16: '' (txn:
tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)

Seeing the following errors on storage logs...
object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +0000] "PUT
/xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data"
408 - "PUT
http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data"
"txf3b4e5f677004474bbd2f-00536c30d1" "proxy-server 12241" 95.6405 "-"

It succeeds sometimes, but mostly 408 errors. I don't see any other logs
for the transaction ID, or around these 408 errors, in the log files. Is
this a disk timeout issue? These are only 1GB files and normal writes to
files on these disks are quite fast.

The timeouts from the swift proxy files are...
root@bulkstore-112:~# grep -R timeout /etc/swift/*
/etc/swift/proxy-server.conf:client_timeout = 600
/etc/swift/proxy-server.conf:node_timeout = 600
/etc/swift/proxy-server.conf:recoverable_node_timeout = 600

Can someone help me troubleshoot this issue?

-- 
-Shyam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2014-04-25 Thread Dmitriy Ukhlov
In my opinion it would be enough to read the table schema
from stdin; then it is possible to use a pipe for input from any stream.
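
A hedged sketch of what stdin support would enable ('-' as a stdin marker
and the helper script are assumptions, not implemented features):

  # Pipe a schema straight from a file:
  cat table.json | magnetodb create-table --description-file -
  # ...or from any command that generates one on the fly:
  ./generate-schema.sh | magnetodb create-table --description-file -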


On Fri, Apr 25, 2014 at 6:25 AM, ANDREY OSTAPENKO (CS) <
andrey_ostape...@symantec.com> wrote:

> Hello, everyone!
>
> Now I'm starting to implement cli client for KeyValue Storage service
> MagnetoDB.
> I'm going to use heat approach for cli commands, e.g. heat stack-create
> --template-file ,
> because we have too many parameters to pass to the command.
> For example, table creation command:
>
> magnetodb create-table --description-file 
>
> File will contain json data, e.g.:
>
> {
> "table_name": "data",
> "attribute_definitions": [
> {
> "attribute_name": "Attr1",
> "attribute_type": "S"
> },
> {
> "attribute_name": "Attr2",
> "attribute_type": "S"
> },
> {
> "attribute_name": "Attr3",
> "attribute_type": "S"
> }
> ],
> "key_schema": [
> {
> "attribute_name": "Attr1",
> "key_type": "HASH"
> },
> {
> "attribute_name": "Attr2",
> "key_type": "RANGE"
> }
> ],
> "local_secondary_indexes": [
> {
> "index_name": "IndexName",
> "key_schema": [
> {
> "attribute_name": "Attr1",
> "key_type": "HASH"
> },
> {
> "attribute_name": "Attr3",
> "key_type": "RANGE"
> }
> ],
> "projection": {
> "projection_type": "ALL"
> }
> }
> ]
> }
>
> Blueprint:
> https://blueprints.launchpad.net/magnetodb/+spec/magnetodb-cli-client
>
> If you have any comments, please let me know.
>
> Best regards,
> Andrey Ostapenko




-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2014-04-23 Thread Trevor Vardeman
Hey,

I'm looking through the use-cases doc for review, and I'm confused about the
6th one.  I'm familiar with HTTP cookie-based session persistence, but to
satisfy secure traffic for this case, would there be decryption of content,
injection of the cookie, and then re-encryption?  Is there another session
persistence type that solves this issue already?

Use Cases:  
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

-Trevor Vardeman
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2014-03-26 Thread Wanghao (S)
Hi, all,



There is a use case: we have two nova components (call them nova A and nova B)
and one cinder component. A volume is attached to an instance in nova A, and
then the services of nova A become abnormal.

Because the volume is also wanted in nova B, the cinder API "force detach
volume" is used to free this volume. But when nova A is normal again, nova
can't detach this volume from the instance using the nova API "detach volume",

as nova checks that the volume state must be "attached".



I think we should add a "force detach" function to nova, just like "attach"
and "detach", because after force-detaching a volume in cinder, there is
still some attach information in nova which can't be cleaned up using the
nova API "detach".
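
A hedged sketch of the mismatch (IDs are placeholders; "cinder force-detach"
stands in for the os-force_detach volume action and is hypothetical):

  # cinder side: force-detach frees the volume record in cinder...
  cinder force-detach <volume-id>    # hypothetical CLI for os-force_detach
  # ...but nova still holds stale attachment info; the normal detach is
  # refused because the volume no longer passes nova's "attached" check:
  nova volume-detach <instance-id> <volume-id>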



There is the BP link 
:https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova



Any suggestion is great. THX~

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev