[openstack-dev] [Mistral] Action context passed to all action executions by default

2014-12-08 Thread W Chan
Renat,

Is there any reason why Mistral does not pass action context, such as the
workflow ID, execution ID, task ID, etc., to all of the action executions?  I
think it makes a lot of sense for that information to be made available by
default.  The action can then decide what to do with the information. It
doesn't require a special signature in the __init__ method of the Action
classes.  What do you think?
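A minimal sketch of the idea (illustrative names only, not Mistral's actual
interface): the executor hands every action execution the same context
mapping by default, and the action decides what, if anything, to use.

    # Hedged sketch -- hypothetical names, not Mistral's real API.
    class MyAction(object):
        def run(self, action_context):
            # Every action execution receives the same context by default.
            return {
                'workflow_id': action_context.get('workflow_id'),
                'execution_id': action_context.get('execution_id'),
                'task_id': action_context.get('task_id'),
            }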

Thanks.
Winson


Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

2014-12-08 Thread Samuel Bercovici
Hi,

I agree that the most important thing is to conclude how status properties are 
being managed and handled so it will remain consistent as we move along.
I am fine with starting with a simple model and expanding as needed.
The L7 implementation is ready and waiting for the rest of the model to get in, so 
pool sharing under a listener is something that we should solve now.
I think that pool sharing under listeners connected to the same LB is more 
common than what you describe.

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, December 09, 2014 12:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use 
Cases that led us to adopt this.

So...  I should probably note that I see the case where a user actually shares 
objects as being the exception. I expect that 90% of deployments will never need 
to share objects, except for a few cases--  those cases (of 1:N relationships) 
are:

* Loadbalancers must be able to have many Listeners
* When L7 functionality is introduced, L7 policies must be able to refer to the 
same Pool under a single Listener. (That is to say, sharing Pools under the 
scope of a single Listener makes sense, but only after L7 policies are 
introduced.)

I specifically see the following kinds of sharing as having near-zero demand:

* Listeners shared across multiple loadbalancers
* Pools shared across multiple listeners
* Members shared across multiple pools

So, despite the fact that sharing doesn't make status reporting any more or 
less complex, I'm still in favor of starting with 1:1 relationships between 
most kinds of objects and then changing those to 1:N or M:N as we get user 
demand for this. As I said in my first response, allowing too many many-to-many 
relationships feels like a solution to a problem that doesn't really exist, and 
introduces a lot of unnecessary complexity.

Stephen

On Sun, Dec 7, 2014 at 11:43 PM, Samuel Bercovici <samu...@radware.com> wrote:
+1


From: Stephen Balukoff 
[mailto:sbaluk...@bluebox.net]
Sent: Friday, December 05, 2014 7:59 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use 
Cases that led us to adopt this.

German-- but the point is that sharing apparently has no effect on the number 
of permutations for status information. The only difference here is that 
without sharing it's more work for the user to maintain and modify trees of 
objects.

On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German <german.eichber...@hp.com> wrote:
Hi Brandon + Stephen,

Having all those permutations (and potentially testing them) made us lean 
against the sharing case in the first place. It’s just a lot of extra work for 
only a small number of our customers.

German

From: Stephen Balukoff 
[mailto:sbaluk...@bluebox.net]
Sent: Thursday, December 04, 2014 9:17 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use 
Cases that led us to adopt this.

Hi Brandon,

Yeah, in your example, member1 could potentially have 8 different statuses (and 
this is a small example!)...  If that member starts flapping, it means that 
every time it flaps there are 8 notifications being passed upstream.

Note that this problem actually doesn't get any better if we're not sharing 
objects but are just duplicating them (i.e. not sharing objects, but the user 
making references to the same back-end machine as 8 different members).

To be honest, I don't see sharing entities at many levels like this being the 
rule for most of our installations-- maybe a few percentage points of 
installations will do an excessive sharing of members, but I doubt it. So 
really, even though reporting status like this is likely to generate a pretty 
big tree of data, I don't think this is actually a problem, eh. And I don't see 
sharing entities actually reducing the workload of what needs to happen behind 
the scenes. (It just allows us to conceal more of this work from the user.)

Stephen



On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan <brandon.lo...@rackspace.com> wrote:
Sorry it's taken me a while to respond to this.

So I wasn't thinking about this correctly.  I was afraid you would have
to pass in a full tree of parent child representations to /loadbalancers
to update anything a load balancer is associated with (including down
to members).  However, after thinking about it, a user would just make
an association call on each object.  For example, associate member1 with
pool1, associate pool1 with listener1, then associate loadbalancer1 with
listener1.  Updating is just as simple as updating each entity.

This does bring up another problem though.  If a listener can live on
many load balancers, and a pool can live on many liste

Re: [openstack-dev] [neutron] Changes to the core team

2014-12-08 Thread Gariganti, Sudhakar Babu
Congrats Kevin and Henry ☺.

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Monday, December 08, 2014 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Changes to the core team

On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery <mest...@mestery.com> wrote:
Now that we're in the thick of working hard on Kilo deliverables, I'd
like to make some changes to the neutron core team. Reviews are the
most important part of being a core reviewer, so we need to ensure
cores are doing reviews. The stats for the 180 day period [1] indicate
some changes are needed for cores who are no longer reviewing.

First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
neutron-core. Bob and Nachi have been core members for a while now.
They have contributed to Neutron over the years in reviews, code and
leading sub-teams. I'd like to thank them for all that they have done
over the years. I'd also like to propose that, should they start
reviewing more going forward, the core team look to fast-track them
back into neutron-core. But for now, their review stats place them
below the rest of the team for 180 days.

As part of the changes, I'd also like to propose two new members to
neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
been very active in reviews, meetings, and code for a while now. Henry
led the DB team which fixed Neutron DB migrations during Juno. Kevin
has been actively working across all of Neutron; he's done some great
work on security fixes and stability fixes in particular. Their
comments in reviews are insightful and they have helped to onboard new
reviewers and taken the time to work with people on their patches.

Existing neutron cores, please vote +1/-1 for the addition of Henry
and Kevin to the core team.
Enough time has passed now, and Kevin and Henry have received enough +1 votes. 
So I'd like to welcome them to the core team!

Thanks,
Kyle

Thanks!
Kyle

[1] http://stackalytics.com/report/contribution/neutron-group/180



[openstack-dev] [Ironic] How to get past pxelinux.0 bootloader?

2014-12-08 Thread Peeyush Gupta
Hi all,

So, I have set up a devstack Ironic environment for baremetal deployment. I
have been able to deploy a baremetal node successfully using the
pxe_ipmitool driver. Now, I am trying to boot a server where I already
have a bootloader, i.e. I don't need pxelinux to go and fetch the kernel and
initrd images for me. I want to transfer them directly.

I checked out the code and figured out that there are dhcp opts
available that are modified using pxe_utils.py, but changing them didn't
help. Then I moved to ironic.conf, but there I only see an option to
set pxe_bootfile_name, which is exactly what I want to avoid. Can anyone
please help me with this situation? I don't want to go through the
pxelinux.0 bootloader; I just want to transfer the kernel and
initrd images directly.
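For reference, the DHCP options Ironic's PXE code builds are roughly of the
shape below (a sketch; option names paraphrased from the pxe_utils.py of that
era, values illustrative). Note that BIOS PXE firmware can only chain-load a
network bootstrap program such as pxelinux.0 or an iPXE binary, not a raw
kernel plus initrd, which is why the bootfile option always names a
bootloader rather than the kernel itself:

    # Sketch of the option list pxe_utils.py hands to neutron/dnsmasq
    # (illustrative values; treat the names as an approximation).
    dhcp_opts = [
        {'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'},
        {'opt_name': 'server-ip-address', 'opt_value': '10.0.0.10'},
        {'opt_name': 'tftp-server', 'opt_value': '10.0.0.10'},
    ]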

Thanks.

-- 
Peeyush Gupta
gpeey...@linux.vnet.ibm.com




[openstack-dev] [Ironic] Some questions about Ironic service

2014-12-08 Thread xianchaobo
Hello, all

I'm trying to install and configure the Ironic service, and a few things
confuse me. I created two neutron networks, a public network and a private
network. The private network is used to deploy physical machines;
the public network is used to provide floating IPs.


(1) Can the private network type be VLAN or VXLAN? (In the install guide, the 
network type is flat.)

(2) Can the network of deployed physical machines be managed by neutron?

(3) Can different tenants have their own networks to manage physical machines?

(4) Does Ironic provide some mechanism for deployed physical machines
to use storage, such as shared storage or Cinder volumes?

Thanks,
XianChaobo


Re: [openstack-dev] [api] Using query string or request body to pass parameter

2014-12-08 Thread Alex Xu
Kevin, thanks for the info! I agree with you. The RFC is the authority; using a
payload with DELETE isn't a good way.

2014-12-09 7:58 GMT+08:00 Kevin L. Mitchell :

> On Tue, 2014-12-09 at 07:38 +0800, Alex Xu wrote:
> > Not sure about all of them; nova is limited at
> > https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79
> > That is under our control.
>
> It is, but the client frameworks aren't, and some of them prohibit
> sending a body with a DELETE request.  Further, RFC7231 has this to say
> about DELETE request bodies:
>
> A payload within a DELETE request message has no defined semantics;
> sending a payload body on a DELETE request might cause some existing
> implementations to reject the request.
>
> (§4.3.5)
>
> I think we have to conclude that, if we need a request body, we cannot
> use the DELETE method.  We can modify the operation, such as setting a
> "force" flag, with a query parameter on the URI, but a request body
> should be considered out of bounds with respect to DELETE.
>
> > Maybe not just ask the question for delete, but also for other methods.
> >
> > 2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell <kevin.mitch...@rackspace.com>:
> > On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote:
> > > I wonder if we can use body in delete; currently, there isn't any
> > > case used in v2/v3 api.
> >
> > No, many frameworks raise an error if you try to include a body
> with a
> > DELETE request.
> > --
> > Kevin L. Mitchell 
> > Rackspace
>
> --
> Kevin L. Mitchell 
> Rackspace
>
>


[openstack-dev] [Mistral] Query on creating multiple resources

2014-12-08 Thread Sushma Korati

Hi,


Thank you guys.


Yes, I am able to do this with Heat, but I faced issues while trying the same 
with Mistral.

As suggested, I will try with the latest Mistral branch. Thank you once again.


Regards,

Sushma





From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: Tuesday, December 09, 2014 6:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources

Hi Sushma,

Did you explore Heat templates? As Zane mentioned you can do this via Heat 
template without writing any workflows.
Do you have any specific use cases which you can't solve with Heat template?

The create VM workflow was a demo example. Mistral can potentially be used by Heat 
or other orchestration tools to do the actual interaction with APIs, but for a user 
it might be easier to use Heat functionality.

Thanks,
Georgy

On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin <nmakhot...@mirantis.com> wrote:
Hi, Sushma!
Can we create multiple resources using a single task, like multiple keypairs or 
security-groups or networks etc?

Yes, we can. This feature is in development now and is considered 
experimental - 
https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections

Just clone the latest master branch of mistral.

You can specify "for-each" task property and provide the array of data to your 
workflow:

 

version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
type: direct
input:
  - array_with_names_and_descriptions

tasks:
  create_secgroups:

for-each:

  data: $.array_with_names_and_descriptions
action: nova.security_groups_create 
name={$.data.name} description={$.data.description}
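For reference, the input such a workflow expects is a single array
parameter. A sketch of what that input could look like (expressed here as a
Python dict; the names are the ones from Sushma's earlier example):

    # Example input for the for-each workflow above (illustrative values).
    workflow_input = {
        'array_with_names_and_descriptions': [
            {'name': 'secgrp1', 'description': 'using mistral'},
            {'name': 'secgrp2', 'description': 'using mistral'},
        ]
    }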


On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter <zbit...@redhat.com> wrote:
On 08/12/14 09:41, Sushma Korati wrote:
Can we create multiple resources using a single task, like multiple
keypairs or security-groups or networks etc?

Define them in a Heat template and create the Heat stack as a single task.

- ZB




--
Best Regards,
Nikolay




--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284




[openstack-dev] [Neutron][OVS] ovs-ofctl-to-python blueprint

2014-12-08 Thread YAMAMOTO Takashi
Hi,

Here's a blueprint to make the OVS agent use Ryu to talk with OVS.

https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python
https://review.openstack.org/#/c/138980/  (kilo spec)

Given that ML2/OVS is one of the most popular plugins and the proposal
has a few possibly controversial points, I want to ask for wider opinions.

- It introduces a new requirement for the OVS agent (Ryu).
- It makes the OVS agent require a newer OVS version than it currently does.
- What to do for xenapi support is still under investigation/research.
- Possible security impact.

Please comment on gerrit if you have any opinions.  Thank you.

YAMAMOTO Takashi



Re: [openstack-dev] [neutron] Changes to the core team

2014-12-08 Thread trinath.soman...@freescale.com
Congratulation Kevin and Henry ☺

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Monday, December 08, 2014 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Changes to the core team

On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery <mest...@mestery.com> wrote:
Now that we're in the thick of working hard on Kilo deliverables, I'd
like to make some changes to the neutron core team. Reviews are the
most important part of being a core reviewer, so we need to ensure
cores are doing reviews. The stats for the 180 day period [1] indicate
some changes are needed for cores who are no longer reviewing.

First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
neutron-core. Bob and Nachi have been core members for a while now.
They have contributed to Neutron over the years in reviews, code and
leading sub-teams. I'd like to thank them for all that they have done
over the years. I'd also like to propose that, should they start
reviewing more going forward, the core team look to fast-track them
back into neutron-core. But for now, their review stats place them
below the rest of the team for 180 days.

As part of the changes, I'd also like to propose two new members to
neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
been very active in reviews, meetings, and code for a while now. Henry
led the DB team which fixed Neutron DB migrations during Juno. Kevin
has been actively working across all of Neutron; he's done some great
work on security fixes and stability fixes in particular. Their
comments in reviews are insightful and they have helped to onboard new
reviewers and taken the time to work with people on their patches.

Existing neutron cores, please vote +1/-1 for the addition of Henry
and Kevin to the core team.
Enough time has passed now, and Kevin and Henry have received enough +1 votes. 
So I'd like to welcome them to the core team!

Thanks,
Kyle

Thanks!
Kyle

[1] http://stackalytics.com/report/contribution/neutron-group/180



Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-08 Thread A, Keshava
Stephen,

Interesting to know what an “ACTIVE-ACTIVE topology of load balancing VMs” is.
What is the scenario: is it a Service-VM (of NFV) or a tenant VM?
Curious to know the background of these thoughts.

keshava


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, December 09, 2014 7:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and 
collaboration

For what it's worth, I know that the Octavia project will need something which 
can do more advanced layer-3 networking in order to deliver an ACTIVE-ACTIVE 
topology of load balancing VMs / containers / machines. That's still a "down 
the road" feature for us, but it would be great to be able to do more advanced 
layer-3 networking in earlier releases of Octavia as well. (Without this, we 
might have to go through back doors to get Neutron to do what we need it to, 
and I'd rather avoid that.)

I'm definitely up for learning more about your proposal for this project, 
though I've not had any practical experience with Ryu yet. I would also like to 
see whether it's possible to do the sort of advanced layer-3 networking you've 
described without using OVS. (We have found that OVS tends to be not quite 
mature / stable enough for our needs and have moved most of our clouds to use 
ML2 / standard linux bridging.)

Carl:  I'll also take a look at the two gerrit reviews you've linked. Is this 
week's L3 meeting not happening then? (And man-- I wish it were an hour or two 
later in the day. Coming at y'all from PST timezone here.)

Stephen

On Mon, Dec 8, 2014 at 11:57 AM, Carl Baldwin <c...@ecbaldwin.net> wrote:
Ryan,

I'll be traveling around the time of the L3 meeting this week.  My
flight leaves 40 minutes after the meeting and I might have trouble
attending.  It might be best to put it off a week or to plan another
time -- maybe Friday -- when we could discuss it in IRC or in a
Hangout.

Carl

On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger <ryan.cleven...@rackspace.com> wrote:
> Thanks for getting back Carl. I think we may be able to make this weeks
> meeting. Jason Kölker is the engineer doing all of the lifting on this side.
> Let me get with him to review what you all have so far and check our
> availability.
>
> 
>
> Ryan Clevenger
> Manager, Cloud Engineering - US
> m: 678.548.7261
> e: ryan.cleven...@rackspace.com
>
> 
> From: Carl Baldwin [c...@ecbaldwin.net]
> Sent: Sunday, December 07, 2014 4:04 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation
> and collaboration
>
> Ryan,
>
> I have been working with the L3 sub team in this direction.  Progress has
> been slow because of other priorities but we have made some.  I have written
> a blueprint detailing some changes needed to the code to enable the
> flexibility to one day run floating IPs on an l3 routed network [1].  Jaime
> has been working on one that integrates ryu (or other speakers) with neutron
> [2].  Dvr was also a step in this direction.
>
> I'd like to invite you to the l3 weekly meeting [3] to discuss further.  I'm
> very happy to see interest in this area and have someone new to collaborate.
>
> Carl
>
> [1] https://review.openstack.org/#/c/88619/
> [2] https://review.openstack.org/#/c/125401/
> [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
>
> On Dec 3, 2014 4:04 PM, "Ryan Clevenger" 
> mailto:ryan.cleven...@rackspace.com>>
> wrote:
>>
>> Hi,
>>
>> At Rackspace, we have a need to create a higher level networking service
>> primarily for the purpose of creating a Floating IP solution in our
>> environment. The current solutions for Floating IPs, being tied to plugin
>> implementations, does not meet our needs at scale for the following reasons:
>>
>> 1. Limited endpoint H/A mainly targeting failover only and not
>> multi-active endpoints,
>> 2. Lack of noisy neighbor and DDOS mitigation,
>> 3. IP fragmentation (with cells, public connectivity is terminated inside
>> each cell leading to fragmentation and IP stranding when cell CPU/Memory use
>> doesn't line up with allocated IP blocks. Abstracting public connectivity
>> away from nova installations allows for much more efficient use of those
>> precious IPv4 blocks).
>> 4. Diversity in transit (multiple encapsulation and transit types on a per
>> floating ip basis).
>>
>> We realize that network infrastructures are often unique and such a
>> solution would likely diverge from provider to provider. However, we would
>> love to collaborate with the community to see if such a project could be
>> built that would meet the needs of providers at scale. We believe that, at
>> its core, this solution would boil down to terminating north<->south traffic
>> temporarily at a massively horizontally scalable

[openstack-dev] [Mistral] Query on creating multiple resources

2014-12-08 Thread Sushma Korati
Hello All,


Can we create multiple resources using a single task, like multiple keypairs or 
security-groups or networks etc?


I am trying to extend the existing "create_vm" workflow, such that it accepts a 
list of security groups. In the workflow, before create_vm I am trying to 
create the security group if it does not exist.


Just to test the security group functionality individually I wrote a sample 
workflow:



version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
    type: direct
    input:
      - name
      - description

    tasks:
      create_secgroups:
        action: nova.security_groups_create name={$.name} description={$.description}


This is a straightforward workflow, but I am unable to figure out how to pass 
multiple security groups to the above workflow.

I tried passing multiple dicts in the context file, but it did not work.

--

{
  "name": "secgrp1",
  "description": "using mistral"
},
{
  "name": "secgrp2",
  "description": "using mistral"
}

-

Is there any way to modify this workflow such that it creates more than one 
security group?

Please help.


Regards,

Sushma





[openstack-dev] [gantt] Scheduler sub-group meeting agenda 12/9

2014-12-08 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)





1) Status on cleanup work - 
https://wiki.openstack.org/wiki/Gantt/kilo


--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-12-08 Thread Li Ma
Hi all, I tried to deploy ZeroMQ via devstack and it definitely failed 
with lots of problems: dependencies, topics, matchmaker setup, etc. 
I've already registered a blueprint for devstack-zeromq [1].


Besides, I suggest building a wiki page to track all the 
work items related to ZeroMQ. The general sections may be [Why ZeroMQ], 
[Current Bugs & Reviews], [Future Plan & Blueprints], [Discussions], 
[Resources], etc.


Any comments?

[1] https://blueprints.launchpad.net/devstack/+spec/zeromq

cheers,
Li Ma

On 2014/11/18 21:46, James Page wrote:


On 18/11/14 00:55, Denis Makogon wrote:


So if zmq driver support in devstack is fixed, we can easily add a
new job to run them in the same way.


Btw this is a good question. I will take look at current state of
zmq in devstack.

I don't think it's that far off, and it's broken rather than missing -
the rpc backend code needs updating to use oslo.messaging rather than
project specific copies of the rpc common codebase (pre oslo).
Devstack should be able to run with the local matchmaker in most
scenarios but it looks like there was support for the redis matchmaker
as well.

If you could take some time to fixup that would be awesome!

-- 
James Page

Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org



Re: [openstack-dev] [Compute][Nova] Mid-cycle meetup details for kilo

2014-12-08 Thread Michael Still
Wow, now we're up to 75% "sold" in just five hours. So... If you're
stalling on registering, please don't, as it sounds like I might need
to ask the venue for more seats.

Thanks,
Michael

On Tue, Dec 9, 2014 at 9:10 AM, Michael Still  wrote:
> Just a reminder that registration for the Nova mid-cycle is now open.
> We're currently 50% "sold", so early signup will help us work out if
> we need to add more seats or not.
>
> https://www.eventbrite.com.au/e/openstack-nova-kilo-mid-cycle-developer-meetup-tickets-14767182039
>
> Thanks,
> Michael
>
> On Thu, Dec 4, 2014 at 10:18 AM, Michael Still  wrote:
>> Sigh, sorry. It is of course the Kilo meetup:
>>
>> https://www.eventbrite.com.au/e/openstack-nova-kilo-mid-cycle-developer-meetup-tickets-14767182039
>>
>> Michael
>>
>> On Thu, Dec 4, 2014 at 10:16 AM, Michael Still  wrote:
>>> I've just created the signup page for this event. It's here:
>>>
>>> https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-14767182039
>>>
>>> Cheers,
>>> Michael
>>>
>>> On Wed, Oct 15, 2014 at 3:45 PM, Michael Still  wrote:
 Hi.

 I am pleased to announce details for the Kilo Compute mid-cycle
 meetup, but first some background about how we got here.

 Two companies actively involved in OpenStack came forward with offers
 to host the Compute meetup. However, one of those companies has
 gracefully decided to wait until the L release because of the cold
 conditions at their proposed location (think several feet of snow).

 So instead, we're left with California!

 The mid-cycle meetup will be from 26 to 28 January 2015, at the VMware
 offices in Palo Alto, California.

 Thanks to VMware for stepping up and offering to host. It sure does
 make my life easy.

 More details will be forthcoming closer to the event, but I wanted to
 give people as much notice as possible about dates and location so
 they can start negotiating travel if they want to come.

 Cheers,
 Michael

 --
 Rackspace Australia
>>>
>>>
>>>
>>> --
>>> Rackspace Australia
>>
>>
>>
>> --
>> Rackspace Australia
>
>
>
> --
> Rackspace Australia



-- 
Rackspace Australia



Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?

2014-12-08 Thread Danny Choi (dannchoi)
Neither “delete” nor “force-delete” worked for me; they failed to remove the 
VM.

Danny


Date: Sun, 7 Dec 2014 21:17:30 +0530
From: foss geek <thefossg...@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [qa] How to delete a VM which is in ERROR
state?
Message-ID:
<cadxhynxtvakcg58s2_ym5koozm6yufk9urok7wxtceya7oa...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Also try with nova force-delete after reset:

$ nova help force-delete
usage: nova force-delete <server>

Force delete a server.

Positional arguments:
  <server>  Name or ID of server.

--
Thanks & Regards
E-Mail: thefossg...@gmail.com
IRC: neophy
Blog : http://lmohanphy.livejournal.com/


On Sun, Dec 7, 2014 at 9:10 PM, foss geek <thefossg...@gmail.com> wrote:

Have you tried to delete after reset?

# nova reset-state --active <server>

# nova delete <server>

It works well for me if the VM is in the error state.


--
Thanks & Regards
E-Mail: thefossg...@gmail.com
IRC: neophy
Blog : http://lmohanphy.livejournal.com/



On Sun, Dec 7, 2014 at 7:17 PM, Danny Choi (dannchoi) <dannc...@cisco.com> wrote:

  That does not work.

  It put the VM in ACTIVE Status, but in NOSTATE Power State.

  Subsequent delete still won't remove the VM.

+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
| 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ACTIVE | -          | NOSTATE     |          |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+


  Regards,
Danny



Re: [openstack-dev] [Mistral] Event Subscription

2014-12-08 Thread W Chan
Renat,

On sending events to an "exchange", I mean an exchange on the transport
(i.e. rabbitMQ exchange
https://www.rabbitmq.com/tutorials/amqp-concepts.html).  On implementation
we can probably explore the notification feature in oslo.messaging.  But on
second thought, this would limit the consumers to trusted subsystems or
services though.  If we want the event consumers to be any 3rd party,
including untrusted, then maybe we should keep it as HTTP calls.
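A rough sketch of that oslo.messaging notification route (the exact
constructor arguments here are an assumption for illustration, not a settled
design):

    # Hedged sketch -- names and arguments are assumptions, not a design.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id='mistral.engine',
                                       driver='messaging',
                                       topic='notifications')
    # Context, event type, and payload are illustrative.
    notifier.info({}, 'mistral.execution.state_change',
                  {'execution_id': 'abc123', 'state': 'SUCCESS'})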

Winson


Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-08 Thread Stephen Balukoff
For what it's worth, I know that the Octavia project will need something
which can do more advanced layer-3 networking in order to deliver an
ACTIVE-ACTIVE topology of load balancing VMs / containers / machines.
That's still a "down the road" feature for us, but it would be great to be
able to do more advanced layer-3 networking in earlier releases of Octavia
as well. (Without this, we might have to go through back doors to get
Neutron to do what we need it to, and I'd rather avoid that.)

I'm definitely up for learning more about your proposal for this project,
though I've not had any practical experience with Ryu yet. I would also
like to see whether it's possible to do the sort of advanced layer-3
networking you've described without using OVS. (We have found that OVS
tends to be not quite mature / stable enough for our needs and have moved
most of our clouds to use ML2 / standard linux bridging.)

Carl:  I'll also take a look at the two gerrit reviews you've linked. Is
this week's L3 meeting not happening then? (And man-- I wish it were an
hour or two later in the day. Coming at y'all from PST timezone here.)

Stephen

On Mon, Dec 8, 2014 at 11:57 AM, Carl Baldwin  wrote:

> Ryan,
>
> I'll be traveling around the time of the L3 meeting this week.  My
> flight leaves 40 minutes after the meeting and I might have trouble
> attending.  It might be best to put it off a week or to plan another
> time -- maybe Friday -- when we could discuss it in IRC or in a
> Hangout.
>
> Carl
>
> On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger
>  wrote:
> > Thanks for getting back Carl. I think we may be able to make this weeks
> > meeting. Jason Kölker is the engineer doing all of the lifting on this
> side.
> > Let me get with him to review what you all have so far and check our
> > availability.
> >
> > 
> >
> > Ryan Clevenger
> > Manager, Cloud Engineering - US
> > m: 678.548.7261
> > e: ryan.cleven...@rackspace.com
> >
> > 
> > From: Carl Baldwin [c...@ecbaldwin.net]
> > Sent: Sunday, December 07, 2014 4:04 PM
> > To: OpenStack Development Mailing List
> > Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea
> solicitation
> > and collaboration
> >
> > Ryan,
> >
> > I have been working with the L3 sub team in this direction.  Progress has
> > been slow because of other priorities but we have made some.  I have
> written
> > a blueprint detailing some changes needed to the code to enable the
> > flexibility to one day run floating IPs on an l3 routed network [1].
> Jaime
> > has been working on one that integrates ryu (or other speakers) with
> neutron
> > [2].  Dvr was also a step in this direction.
> >
> > I'd like to invite you to the l3 weekly meeting [3] to discuss further.
> I'm
> > very happy to see interest in this area and have someone new to
> collaborate.
> >
> > Carl
> >
> > [1] https://review.openstack.org/#/c/88619/
> > [2] https://review.openstack.org/#/c/125401/
> > [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
> >
> > On Dec 3, 2014 4:04 PM, "Ryan Clevenger" 
> > wrote:
> >>
> >> Hi,
> >>
> >> At Rackspace, we have a need to create a higher level networking service
> >> primarily for the purpose of creating a Floating IP solution in our
> >> environment. The current solutions for Floating IPs, being tied to
> plugin
> >> implementations, does not meet our needs at scale for the following
> reasons:
> >>
> >> 1. Limited endpoint H/A mainly targeting failover only and not
> >> multi-active endpoints,
> >> 2. Lack of noisy neighbor and DDOS mitigation,
> >> 3. IP fragmentation (with cells, public connectivity is terminated
> inside
> >> each cell leading to fragmentation and IP stranding when cell
> CPU/Memory use
> >> doesn't line up with allocated IP blocks. Abstracting public
> connectivity
> >> away from nova installations allows for much more efficient use of those
> >> precious IPv4 blocks).
> >> 4. Diversity in transit (multiple encapsulation and transit types on a
> per
> >> floating ip basis).
> >>
> >> We realize that network infrastructures are often unique and such a
> >> solution would likely diverge from provider to provider. However, we
> would
> >> love to collaborate with the community to see if such a project could be
> >> built that would meet the needs of providers at scale. We believe that,
> at
> >> its core, this solution would boil down to terminating north<->south
> traffic
> >> temporarily at a massively horizontally scalable centralized core and
> then
> >> encapsulating traffic east<->west to a specific host based on the
> >> association setup via the current L3 router's extension's 'floatingips'
> >> resource.
> >>
> >> Our current idea, involves using Open vSwitch for header rewriting and
> >> tunnel encapsulation combined with a set of Ryu applications for
> management:
> >>
> >> https://i.imgur.com/bivSdcC.png
> >>
> >> The Ryu application uses Ryu's BGP support to announce up 

[openstack-dev] [keystone][all] Max Complexity Check Considered Harmful

2014-12-08 Thread Brant Knudson
Not too long ago projects added a maximum complexity check to tox.ini, for
example keystone has "max-complexity=24". Seemed like a good idea at the
time, but in a recent attempt to lower the maximum complexity check in
keystone[1][2], I found that the maximum complexity check can actually lead
to less understandable code. This is because the check includes an embedded
function's "complexity" in the function that it's in.

The way I would have lowered the complexity of the function in keystone is
to extract the complex part into a new function. This can make the existing
function much easier to understand for all the reasons that one defines a
function for code. Since this new function is obviously only called from
the function it's currently in, it makes sense to keep the new function
inside the existing function. It's simpler to think about an embedded
function because then you know it's only called from one place. The problem
is, because of the existing complexity check behavior, this doesn't lower
the "complexity" according to the complexity check, so you wind up putting
the function as a new top-level, and now a reader is has to assume that the
function could be called from anywhere and has to be much more cautious
about changes to the function.

Since the complexity check can lead to code that's harder to understand, it
must be considered harmful and should be removed, at least until the
incorrect behavior is corrected.
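A minimal sketch of the behavior described above (illustrative only; the
exact scores depend on the mccabe version flake8 uses):

    # The checker attributes inner()'s branches to outer(), so nesting the
    # helper does not lower outer()'s reported complexity; only moving
    # inner() to module level does, with the readability cost noted above.
    def outer(items):
        def inner(x):
            if x > 0:
                return x
            return -x
        return [inner(i) for i in items]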

[1] https://review.openstack.org/#/c/139835/
[2] https://review.openstack.org/#/c/139836/
[3] https://review.openstack.org/#/c/140188/

- Brant


Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-08 Thread Jim Rollenhagen


On December 8, 2014 2:23:58 PM PST, Devananda van der Veen 
 wrote:
>I'd like to raise this topic for a wider discussion outside of the
>hallway
>track and code reviews, where it has thus far mostly remained.
>
>In previous discussions, my understanding has been that the Fuel team
>sought to use Ironic to manage "pets" rather than "cattle" - and doing
>so
>required extending the API and the project's functionality in ways that
>no
>one else on the core team agreed with. Perhaps that understanding was
>wrong
>(or perhaps not), but in any case, there is now a proposal to add a
>FuelAgent driver to Ironic. The proposal claims this would meet that
>teams'
>needs without requiring changes to the core of Ironic.
>
>https://review.openstack.org/#/c/138115/

I think it's clear from the review that I share the opinions expressed in this 
email. 

That said (and hopefully without derailing the thread too much), I'm curious 
how this driver could do software RAID or LVM without modifying Ironic's API or 
data model. How would the agent know how these should be built? How would an 
operator or user tell Ironic what the disk/partition/volume layout would look 
like?
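Purely as a hypothetical illustration (none of this exists in Ironic's API
or data model today), the kind of layout description the question implies an
operator would somehow have to hand to the agent:

    # Hypothetical sketch only -- not an existing Ironic field or API.
    disk_layout = {
        'raid': {'level': '1', 'devices': ['/dev/sda', '/dev/sdb']},
        'lvm': {'volume_group': 'vg0',
                'volumes': [{'name': 'root', 'size_gb': 40},
                            {'name': 'data', 'size_gb': 200}]},
    }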

And before it's said - no, I don't think vendor passthru API calls are an 
appropriate answer here. 

// jim

>
>The Problem Description section calls out four things, which have all
>been
>discussed previously (some are here [0]). I would like to address each
>one,
>invite discussion on whether or not these are, in fact, problems facing
>Ironic (not whether they are problems for someone, somewhere), and then
>ask
>why these necessitate a new driver be added to the project.
>
>
>They are, for reference:
>
>1. limited partition support
>
>2. no software RAID support
>
>3. no LVM support
>
>4. no support for hardware that lacks a BMC
>
>#1.
>
>When deploying a partition image (eg, QCOW format), Ironic's PXE deploy
>driver performs only the minimal partitioning necessary to fulfill its
>mission as an OpenStack service: respect the user's request for root,
>swap,
>and ephemeral partition sizes. When deploying a whole-disk image,
>Ironic
>does not perform any partitioning -- such is left up to the operator
>who
>created the disk image.
>
>Support for arbitrarily complex partition layouts is not required by,
>nor
>does it facilitate, the goal of provisioning physical servers via a
>common
>cloud API. Additionally, as with #3 below, nothing prevents a user from
>creating more partitions in unallocated disk space once they have
>access to
>their instance. Therefore, I don't see how Ironic's minimal support for
>partitioning is a problem for the project.
>
>#2.
>
>There is no support for defining a RAID in Ironic today, at all,
>whether
>software or hardware. Several proposals were floated last cycle; one is
>under review right now for DRAC support [1], and there are multiple
>call
>outs for RAID building in the state machine mega-spec [2]. Any such
>support
>for hardware RAID will necessarily be abstract enough to support
>multiple
>hardware vendor's driver implementations and both in-band creation (via
>IPA) and out-of-band creation (via vendor tools).
>
>Given the above, it may become possible to add software RAID support to
>IPA
>in the future, under the same abstraction. This would closely tie the
>deploy agent to the images it deploys (the latter image's kernel would
>be
>dependent upon a software RAID built by the former), but this would
>necessarily be true for the proposed FuelAgent as well.
>
>I don't see this as a compelling reason to add a new driver to the
>project.
>Instead, we should (plan to) add support for software RAID to the
>deploy
>agent which is already part of the project.
>
>#3.
>
>LVM volumes can easily be added by a user (after provisioning) within
>unallocated disk space for non-root partitions. I have not yet seen a
>compelling argument for doing this within the provisioning phase.
>
>#4.
>
>There are already in-tree drivers [3] [4] [5] which do not require a
>BMC.
>One of these uses SSH to connect and run pre-determined commands. Like
>the
>spec proposal, which states at line 122, "Control via SSH access
>feature
>intended only for experiments in non-production environment," the
>current
>SSHPowerDriver is only meant for testing environments. We could
>probably
>extend this driver to do what the FuelAgent spec proposes, as far as
>remote
>power control for cheap always-on hardware in testing environments with
>a
>pre-shared key.
>
>(And if anyone wonders about a use case for Ironic without external
>power
>control ... I can only think of one situation where I would rationally
>ever
>want to have a control-plane agent running inside a user-instance: I am
>both the operator and the only user of the cloud.)
>
>
>
>
>In summary, as far as I can tell, all of the problem statements upon
>which
>the FuelAgent proposal are based are solvable through incremental
>changes
>in existing drivers, or out of scope for the project ent

Re: [openstack-dev] [nova] Adding temporary code to nova to work around bugs in system utilities

2014-12-08 Thread Tony Breeds
On Mon, Dec 08, 2014 at 03:19:40PM -0500, Jay Pipes wrote:

> I reviewed the patch. I don't mind the idea of a [workarounds] section of
> configuration options, but I had an issue with where that code was executed.

Thanks.

Replied.


 
> I think it would be fine to have a [workarounds] config section for just
> this purpose.

Okay good to know.  Again thanks.
 
Yours Tony.




Re: [openstack-dev] [neutron] services split starting today

2014-12-08 Thread Kyle Mestery
Reminder: Neutron is still frozen for commits while we work through the
services split. We hope to have this done tomorrow sometime so we can
un-freeze neutron.

Thanks!
Kyle

On Mon, Dec 8, 2014 at 10:19 AM, Doug Wiegley  wrote:

> To all neutron cores,
>
> Please do not approve any gerrit reviews for advanced services code for
> the next few days.  We will post again when those reviews can resume.
>
> Thanks,
> Doug
>
>
>
> On 12/8/14, 8:49 AM, "Doug Wiegley"  wrote:
>
> >Hi all,
> >
> >The neutron advanced services split is starting today at 9am PDT, as
> >described here:
> >
> >https://review.openstack.org/#/c/136835/
> >
> >
> >.. The remove change from neutron can be seen here:
> >
> >https://review.openstack.org/#/c/139901/
> >
> >
> >.. While the new repos are being sorted out, advanced services will be
> >broken, and services tempest tests will be disabled.  Either grab Juno, or
> >an earlier rev of neutron.
> >
> >The new repos are: neutron-lbaas, neutron-fwaas, neutron-vpnaas.
> >
> >Thanks,
> >Doug
> >
> >


Re: [openstack-dev] [Mistral] Query on creating multiple resources

2014-12-08 Thread Georgy Okrokvertskhov
Hi Sushma,

Did you explore Heat templates? As Zane mentioned you can do this via Heat
template without writing any workflows.
Do you have any specific use cases which you can't solve with Heat template?

The create VM workflow was a demo example. Mistral can potentially be used by
Heat or other orchestration tools to do the actual interaction with APIs, but
for a user it might be easier to use Heat functionality.

Thanks,
Georgy

On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin 
wrote:

> Hi, Sushma!
>
> Can we create multiple resources using a single task, like multiple
>> keypairs or security-groups or networks etc?
>
>
> Yes, we can. This feature is in development now and is considered
> experimental -
> https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections
>
> Just clone the latest master branch of mistral.
>
> You can specify "for-each" task property and provide the array of data to
> your workflow:
>
>  
>
> version: '2.0'
>
> name: secgroup_actions
>
> workflows:
>   create_security_group:
> type: direct
> input:
>   - array_with_names_and_descriptions
>
> tasks:
>   create_secgroups:
>
> for-each:
>
>   data: $.array_with_names_and_descriptions
> action: nova.security_groups_create name={$.data.name}
> description={$.data.description}
> 
>
> On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter  wrote:
>
>> On 08/12/14 09:41, Sushma Korati wrote:
>>
>>> Can we create multiple resources using a single task, like multiple
>>> keypairs or security-groups or networks etc?
>>>
>>
>> Define them in a Heat template and create the Heat stack as a single task.
>>
>> - ZB
>>
>
>
>
> --
> Best Regards,
> Nikolay
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [api] Using query string or request body to pass parameter

2014-12-08 Thread Kevin L. Mitchell
On Tue, 2014-12-09 at 07:38 +0800, Alex Xu wrote:
> Not sure about all of them; nova is limited
> at 
> https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79
> That is under our control.

It is, but the client frameworks aren't, and some of them prohibit
sending a body with a DELETE request.  Further, RFC7231 has this to say
about DELETE request bodies:

A payload within a DELETE request message has no defined semantics;
sending a payload body on a DELETE request might cause some existing
implementations to reject the request.

(§4.3.5)

I think we have to conclude that, if we need a request body, we cannot
use the DELETE method.  We can modify the operation, such as setting a
"force" flag, with a query parameter on the URI, but a request body
should be considered out of bounds with respect to DELETE.
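As a minimal sketch with python-requests (the endpoint and the "force" flag
are the hypothetical examples from this thread):

    # DELETE semantics modified via the query string; the body stays empty.
    import requests

    resp = requests.delete('http://compute.example/v2/servers/some-id',
                           params={'force': 'true'})
    assert resp.request.body is None  # no payload on a DELETE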

> Maybe not just ask the question for delete, but also for other methods.
> 
> 2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell :
> On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote:
> > I wonder if we can use body in delete; currently, there isn't any
> > case used in v2/v3 api.
> 
> No, many frameworks raise an error if you try to include a body with a
> DELETE request.
> --
> Kevin L. Mitchell 
> Rackspace

-- 
Kevin L. Mitchell 
Rackspace




Re: [openstack-dev] Where should Schema files live?

2014-12-08 Thread Anne Gentle
On Mon, Dec 8, 2014 at 1:06 PM, Adam Young  wrote:

> Isn't this what the API repos are for?  Should EG the Keystone schemes be
> served from
>
> https://github.com/openstack/identity-api/
>
>
>
The -api repos will go away once we have completed the merges
replacing them with the -specs repo info.

I wondered if anyone would draw a connection to the API schemas. We haven't
made a plan to maintain XSDs for every API and many teams don't have XSDs
available. So far only Identity, Compute, and Databases have made XSD
files. Should those also go into the specs repository?

Thanks,
Anne




Re: [openstack-dev] [api] Using query string or request body to pass parameter

2014-12-08 Thread Alex Xu
Not sure about all of them; nova is limited at
https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79
That is under our control.

Maybe not just ask the question for delete, but also for other methods.

2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell :

> On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote:
> > I wonder if we can use body in delete; currently, there isn't any
> > case used in v2/v3 api.
>
> No, many frameworks raise an error if you try to include a body with a
> DELETE request.
> --
> Kevin L. Mitchell 
> Rackspace
>
>


Re: [openstack-dev] [Keystone] OSAAA-Policy

2014-12-08 Thread Morgan Fainberg
As a quick note, the OpenStack Identity Program was not renamed to AAA (based 
upon the discussion with the TC) but increased its scope to adopt Audit on top of 
the already included scope of Authorization and Authentication.

Cheers,
Morgan

-- 
Morgan Fainberg
On December 8, 2014 at 5:05:51 PM, Morgan Fainberg (morgan.fainb...@gmail.com) 
wrote:

I agree that this library should not have “Keystone” in the name. This is more 
along the lines of pycadf, something that is housed under the OpenStack 
Identity Program but is more interesting for general use cases than 
something tied exclusively to Keystone.

Cheers,
Morgan

-- 
Morgan Fainberg

On December 8, 2014 at 4:55:20 PM, Adam Young (ayo...@redhat.com) wrote:

The Policy library has been nominated for promotion from Oslo
incubator. The Keystone team was formerly known as the Identity
Program, but is now Authentication, Authorization, and Audit, or AAA.

Does the prefix OSAAA for the library make sense? It should not be
Keystone-policy.



Re: [openstack-dev] [Keystone] Mid-Cycle Meetup Dates/Time/Location

2014-12-08 Thread Morgan Fainberg
As promised, we have an update for venue, recommended hotels, and an RSVP form.

I want to thank Geekdom and Rackspace for helping to put together and host the 
Keystone Midcycle.

All updated information can be found at: 
https://www.morganfainberg.com/blog/2014/11/18/keystone-hackathon-kilo/

Cheers,
Morgan

-- 
Morgan Fainberg
On November 18, 2014 at 2:57:58 PM, Morgan Fainberg (morgan.fainb...@gmail.com) 
wrote:

I am happy to announce a bunch of the information for the Keystone mid-cycle 
meetup. The selection of dates, location, etc is based upon the great feedback 
I received from the earlier poll. Currently the only thing left up in the air 
is the specific venue and recommended hotel(s).

Location: San Antonio, TX
Dates: January 19, 20, 21 (~2 weeks prior to Kilo Milestone 2).
Venue: TBD (we have a couple options in San Antonio, and will provide an update 
as soon as it’s all confirmed)
Recommended Hotels: TBD (we are also working to get a Hotel discount again like 
last time, the recommended hotel will be based upon the final venue).

I will be keeping the following page: 
https://www.morganfainberg.com/blog/2014/11/18/keystone-hackathon-kilo/ 
up-to-date with hotel recommendations, venue specific details, etc. I expect to 
have the Venue and Hotel recommendations ready shortly (full RSVP form will be 
sent out as well once the venue and hotel are confirmed).

I look forward to seeing everyone in January!

Cheers,
Morgan Fainberg


[openstack-dev] [neutron] mid-cycle arrival time Tuesday 12-9-2014

2014-12-08 Thread Kyle Mestery
Folks, per a request from Jun, for tomorrow's mid-cycle, if you could all
arrive no earlier than 8:45, that would be great. Since we're in the
smaller meeting rooms and/or cafeteria area tomorrow, this will allow Jun
and the other Adobe hosts to arrive in time and set up for us.

Thanks!
Kyle


Re: [openstack-dev] [Keystone] OSAAA-Policy

2014-12-08 Thread Morgan Fainberg
I agree that this library should not have “Keystone” in the name. This is more 
along the lines of pycadf, something that is housed under the OpenStack 
Identity Program but is more interesting for general use cases than 
something tied exclusively to Keystone.

Cheers,
Morgan

-- 
Morgan Fainberg

On December 8, 2014 at 4:55:20 PM, Adam Young (ayo...@redhat.com) wrote:

The Policy libraray has been nominated for promotion from Oslo  
incubator. The Keystone team was formally known as the Identity  
Program, but now is Authentication, Authorization, and Audit, or AAA.  

Does the prefeix OSAAA for the library make sense? It should not be  
Keystone-policy.  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Abandoned patches for neutron and python-neutronclient

2014-12-08 Thread Kyle Mestery
As part of a broader cleanup I've been doing today, I went and abandoned a
bunch of patches in both the neutron and python-neutronclient gerrit queues
today. All of these were more than 2 months old. If you plan to continue
working on these, please activate them again and propose new changes. But
this should help cleanup the queues a bit.

Thanks!
Kyle
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] OSAAA-Policy

2014-12-08 Thread Adam Young
The Policy library has been nominated for promotion from Oslo 
incubator.  The Keystone team was formerly known as the Identity 
Program, but now is Authentication, Authorization, and Audit, or AAA.


Does the prefix OSAAA for the library make sense?  It should not be 
Keystone-policy.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Telco Work Group][Ecosystem & Collateral Team] Meeting information

2014-12-08 Thread Barrett, Carol L
The Ecosystem and Collateral team of the Telco Work Group is meeting on Tuesday 
12/9 at 8:00 Pacific Time.

If you're interested in collaborating to accelerate Telco adoption of OpenStack 
through ecosystem engagements and development of collateral (case studies, 
reference architectures, etc), pls join.

Call details: Access: (888) 875-9370, Bridge: 3; PC: 7053780
Etherpad for meeting notes: 
https://etherpad.openstack.org/p/12_9_TWG_Ecosystem_and_Collateral

Thanks
Carol

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support

2014-12-08 Thread Kashyap Chamarthy
On Mon, Dec 08, 2014 at 09:12:24PM +, Jeremy Stanley wrote:
> On 2014-12-08 11:45:36 +0100 (+0100), Kashyap Chamarthy wrote:
> > As Dan Berrangé noted, it's nearly impossible to reproduce this issue
> > independently outside of OpenStack Gating environment. I brought this up
> > at the recently concluded KVM Forum earlier this October. To debug this
> > any further, one of the QEMU block layer developers asked if we can get
> > QEMU instance running on Gate run under `gdb` (IIRC, danpb suggested
> > this too, previously) to get further tracing details.
> 
> We document thoroughly how to reproduce the environments we use for
> testing OpenStack. 

Yep, documentation is appreciated.

> There's nothing rarified about "a Gate run" that anyone with access to
> a public cloud provider would be unable to reproduce, save being able
> to run it over and over enough times to expose less frequent failures.

Sure. To be fair, this was actually tried. At the risk of
over-discussing the topic, allow me to provide a bit more context, quoting
Dan's email from an old thread[1] ("Thoughts on the patch test failure
rate and moving forward" Jul 23, 2014) here for convenience:

"In some of the harder gate bugs I've looked at (especially the
infamous 'live snapshot' timeout bug), it has been damn hard to
actually figure out what's wrong. AFAIK, no one has ever been able
to reproduce it outside of the gate infrastructure. I've even gone
as far as setting up identical Ubuntu VMs to the ones used in the
gate on a local cloud, and running the tempest tests multiple times,
but still can't reproduce what happens on the gate machines
themselves :-( As such we're relying on code inspection and the
collected log messages to try and figure out what might be wrong.

The gate collects a lot of info and publishes it, but in this case I
have found the published logs to be insufficient - I needed to get
the more verbose libvirtd.log file. devstack has the ability to turn
this on via an environment variable, but it is disabled by default
because it would add 3% to the total size of logs collected per gate
job.

There's no way for me to get that environment variable for devstack
turned on for a specific review I want to test with. In the end I
uploaded a change to nova which abused rootwrap to elevate
privileges, install extra deb packages, reconfigure libvirtd logging
and restart the libvirtd daemon.

   
https://review.openstack.org/#/c/103066/11/etc/nova/rootwrap.d/compute.filters
   https://review.openstack.org/#/c/103066/11/nova/virt/libvirt/driver.py

My next attack is to build a custom QEMU binary and hack nova
further so that it can download my custom QEMU binary from a website
onto the gate machine and run the test with it. Failing that I'm
going to be hacking things to try to attach to QEMU in the gate with
GDB and get stack traces.  Anything is doable thanks to rootwrap
giving us a way to elevate privileges from Nova, but it is a
somewhat tedious approach."


   [1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041148.html

To add to the above: in the bug you can find that, in one of the many
invocations, the issue _was_ reproduced once, albeit with questionable
likelihood (details in the bug).

So, it's not that what you're suggesting was never tried. But, from the
above, you can clearly see what kind of convoluted methods you need to
resort to.

One concrete point from the above: it'd be very useful to have an env
variable that can be toggled to run libvirt/QEMU under `gdb` for
$REVIEW.

(Sure, it's a patch that needs to be worked on. . .)

[. . .]

> The QA team tries very hard to make our integration testing
> environment as closely as possible mimic real-world deployment
> configurations. If these sorts of bugs emerge more often because of,
> for example, resource constraints in the test environment then it
> should be entirely likely they'd also be seen in production with the
> same frequency if run on similarly constrained equipment. And as we've
> observed in the past, any code path we stop testing quickly
> accumulates new bugs that go unnoticed until they impact someone's
> production environment at 3am.

I realize you're raising the point that it should not be taken lightly
-- I hope the context provided in this email demonstrates that it
hasn't been.


PS: FWIW, I do enable this codepath in my test environments (sure, it's
not *representative*), but I've yet to reproduce the bug.


-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] HA issues

2014-12-08 Thread John Griffith
On Mon, Dec 8, 2014 at 8:18 AM, Dulko, Michal  wrote:
> Hi all!
>
>
>
> At the summit during crossproject HA session there were multiple Cinder
> issues mentioned. These can be found in this etherpad:
> https://etherpad.openstack.org/p/kilo-crossproject-ha-integration
>
>
>
> Is there any ongoing effort to fix these issues? Is there an idea how to
> approach any of them?
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Thanks for the nudge on this; personally I hadn't seen it.  The items
are pretty vague, but there are definitely plans to try and address a
number of race conditions etc.  I'm not aware of any specific plans to
focus on HA from this perspective, or of anybody stepping up to work on
it, but it certainly would be great for somebody to dig in and start
fleshing this out.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Fuel agent proposal

2014-12-08 Thread Devananda van der Veen
I'd like to raise this topic for a wider discussion outside of the hallway
track and code reviews, where it has thus far mostly remained.

In previous discussions, my understanding has been that the Fuel team
sought to use Ironic to manage "pets" rather than "cattle" - and doing so
required extending the API and the project's functionality in ways that no
one else on the core team agreed with. Perhaps that understanding was wrong
(or perhaps not), but in any case, there is now a proposal to add a
FuelAgent driver to Ironic. The proposal claims this would meet that team's
needs without requiring changes to the core of Ironic.

https://review.openstack.org/#/c/138115/

The Problem Description section calls out four things, which have all been
discussed previously (some are here [0]). I would like to address each one,
invite discussion on whether or not these are, in fact, problems facing
Ironic (not whether they are problems for someone, somewhere), and then ask
why these necessitate a new driver be added to the project.


They are, for reference:

1. limited partition support

2. no software RAID support

3. no LVM support

4. no support for hardware that lacks a BMC

#1.

When deploying a partition image (eg, QCOW format), Ironic's PXE deploy
driver performs only the minimal partitioning necessary to fulfill its
mission as an OpenStack service: respect the user's request for root, swap,
and ephemeral partition sizes. When deploying a whole-disk image, Ironic
does not perform any partitioning -- such is left up to the operator who
created the disk image.

Support for arbitrarily complex partition layouts is not required by, nor
does it facilitate, the goal of provisioning physical servers via a common
cloud API. Additionally, as with #3 below, nothing prevents a user from
creating more partitions in unallocated disk space once they have access to
their instance. Therefore, I don't see how Ironic's minimal support for
partitioning is a problem for the project.

#2.

There is no support for defining a RAID in Ironic today, at all, whether
software or hardware. Several proposals were floated last cycle; one is
under review right now for DRAC support [1], and there are multiple call
outs for RAID building in the state machine mega-spec [2]. Any such support
for hardware RAID will necessarily be abstract enough to support multiple
hardware vendor's driver implementations and both in-band creation (via
IPA) and out-of-band creation (via vendor tools).

Given the above, it may become possible to add software RAID support to IPA
in the future, under the same abstraction. This would closely tie the
deploy agent to the images it deploys (the latter image's kernel would be
dependent upon a software RAID built by the former), but this would
necessarily be true for the proposed FuelAgent as well.

I don't see this as a compelling reason to add a new driver to the project.
Instead, we should (plan to) add support for software RAID to the deploy
agent which is already part of the project.

#3.

LVM volumes can easily be added by a user (after provisioning) within
unallocated disk space for non-root partitions. I have not yet seen a
compelling argument for doing this within the provisioning phase.

#4.

There are already in-tree drivers [3] [4] [5] which do not require a BMC.
One of these uses SSH to connect and run pre-determined commands. Like the
spec proposal, which states at line 122, "Control via SSH access feature
intended only for experiments in non-production environment," the current
SSHPowerDriver is only meant for testing environments. We could probably
extend this driver to do what the FuelAgent spec proposes, as far as remote
power control for cheap always-on hardware in testing environments with a
pre-shared key.

(And if anyone wonders about a use case for Ironic without external power
control ... I can only think of one situation where I would rationally ever
want to have a control-plane agent running inside a user-instance: I am
both the operator and the only user of the cloud.)




In summary, as far as I can tell, all of the problem statements upon which
the FuelAgent proposal are based are solvable through incremental changes
in existing drivers, or out of scope for the project entirely. As another
software-based deploy agent, FuelAgent would duplicate the majority of the
functionality which ironic-python-agent has today.

Ironic's driver ecosystem benefits from a diversity of hardware-enablement
drivers. Today, we have two divergent software deployment drivers which
approach image deployment differently: "agent" drivers use a local agent to
prepare a system and download the image; "pxe" drivers use a remote agent
and copy the image over iSCSI. I don't understand how a second driver which
duplicates the functionality we already have, and shares the same goals as
the drivers we already have, is beneficial to the project.

Doing the same thing twice just increases the burden on the team; w

Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

2014-12-08 Thread Stephen Balukoff
So...  I should probably note that I see the case where a user actually
shares objects as being the exception. I expect that 90% of deployments will
never need to share objects, except for a few cases -- those cases (of 1:N
relationships) are:

* Loadbalancers must be able to have many Listeners
* When L7 functionality is introduced, L7 policies must be able to refer to
the same Pool under a single Listener. (That is to say, sharing Pools under
the scope of a single Listener makes sense, but only after L7 policies are
introduced.)

I specifically see the following kind of sharing having near zero demand:

* Listeners shared across multiple loadbalancers
* Pools shared across multiple listeners
* Members shared across multiple pools

So, despite the fact that sharing doesn't make status reporting any more or
less complex, I'm still in favor of starting with 1:1 relationships between
most kinds of objects and then changing those to 1:N or M:N as we get user
demand for this. As I said in my first response, allowing too many
many-to-many relationships feels like a solution to a problem that doesn't really
exist, and introduces a lot of unnecessary complexity.

Stephen

On Sun, Dec 7, 2014 at 11:43 PM, Samuel Bercovici 
wrote:

>  +1
>
>
>
>
>
> *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
> *Sent:* Friday, December 05, 2014 7:59 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
>
>
>
> German-- but the point is that sharing apparently has no effect on the
> number of permutations for status information. The only difference here is
> that without sharing it's more work for the user to maintain and modify
> trees of objects.
>
>
>
> On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German <
> german.eichber...@hp.com> wrote:
>
> Hi Brandon + Stephen,
>
>
>
> Having all those permutations (and potentially testing them) made us lean
> against the sharing case in the first place. It’s just a lot of extra work
> for only a small number of our customers.
>
>
>
> German
>
>
>
> *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
> *Sent:* Thursday, December 04, 2014 9:17 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
>
>
>
> Hi Brandon,
>
>
>
> Yeah, in your example, member1 could potentially have 8 different statuses
> (and this is a small example!)...  If that member starts flapping, it means
> that every time it flaps there are 8 notifications being passed upstream.
>
>
>
> Note that this problem actually doesn't get any better if we're not
> sharing objects but are just duplicating them (ie. not sharing objects but
> the user makes references to the same back-end machine as 8 different
> members.)
>
>
>
> To be honest, I don't see sharing entities at many levels like this being
> the rule for most of our installations-- maybe a few percentage points of
> installations will do an excessive sharing of members, but I doubt it. So
> really, even though reporting status like this is likely to generate a
> pretty big tree of data, I don't think this is actually a problem, eh. And
> I don't see sharing entities actually reducing the workload of what needs
> to happen behind the scenes. (It just allows us to conceal more of this
> work from the user.)
>
>
>
> Stephen
>
>
>
>
>
>
>
> On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan 
> wrote:
>
> Sorry it's taken me a while to respond to this.
>
> So I wasn't thinking about this correctly.  I was afraid you would have
> to pass in a full tree of parent child representations to /loadbalancers
> to update anything a load balancer is associated with (including down
> to members).  However, after thinking about it, a user would just make
> an association call on each object.  For Example, associate member1 with
> pool1, associate pool1 with listener1, then associate loadbalancer1 with
> listener1.  Updating is just as simple as updating each entity.
>
> This does bring up another problem though.  If a listener can live on
> many load balancers, and a pool can live on many listeners, and a member
> can live on many pools, there's lot of permutations to keep track of for
> status.  you can't just link a member's status to a load balancer bc a
> member can exist on many pools under that load balancer, and each pool
> can exist under many listeners under that load balancer.  For example,
> say I have these:
>
> lb1
> lb2
> listener1
> listener2
> pool1
> pool2
> member1
> member2
>
> lb1 -> [listener1, listener2]
> lb2 -> [listener1]
> listener1 -> [pool1, pool2]
> listener2 -> [pool1]
> pool1 -> [member1, member2]
> pool2 -> [member1]
>
> member1 can now have a different statuses under pool1 and pool2.  since
> listener1 and listener2 both have pool1, this means member1 will now
> have a different

Re: [openstack-dev] [Cinder] Listing of backends

2014-12-08 Thread John Griffith
On Sun, Dec 7, 2014 at 5:35 AM, Pradip Mukhopadhyay
 wrote:
> Thanks!
>
> One more question.
>
> Is there any equivalent API to add keys to the volume-type? I understand we
> have APIs for creating a volume-type, but how about adding a key-value pair
> (say I want to add a key to the volume-type such as
> backend-name="my_iscsi_backend")?
>
>
> Thanks,
> Pradip
>
>
> On Sun, Dec 7, 2014 at 4:25 PM, Duncan Thomas 
> wrote:
>>
>> See https://review.openstack.org/#/c/119938/ - now merged. I don't believe
>> the python-cinderclient side work has been done yet, nor anything in
>> Horizon, but the API itself is now there.
>>
>> On 7 December 2014 at 09:53, Pradip Mukhopadhyay
>>  wrote:
>>>
>>> Hi,
>>>
>>>
>>> Is there a way to find out/list down the backends discovered for Cinder?
>>>
>>>
>>> There is, I guess, no API to get the list of backends.
>>>
>>>
>>>
>>> Thanks,
>>> Pradip
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Duncan Thomas
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Please try not to double post if you could...

Again in answer to your first question, you can do "cinder
service-list" to show the backends that are in use and their status.

Extra-Specs are added with the "type-key" command, so say you have
volume-type = foo:
   cinder type-key foo set key=value
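
To tie that to Pradip's backend-name question, a minimal end-to-end
sketch (the type and backend names here are examples, not defaults):

   cinder type-create my_iscsi_type
   cinder type-key my_iscsi_type set volume_backend_name=my_iscsi_backend
   cinder create --volume-type my_iscsi_type 10

The scheduler then places the volume on whichever backend has a
matching volume_backend_name configured in cinder.conf.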

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]

2014-12-08 Thread John Griffith
cinder service-list will show all the backends Cinder knows about and
their status.

On Sun, Dec 7, 2014 at 2:52 AM, Pradip Mukhopadhyay
 wrote:
> Hi,
>
>
> Is there a way to find out/list down the backends discovered for Cinder?
>
>
> There is, I guess, no API to get the list of backends.
>
>
>
> Thanks,
> Pradip
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-08 Thread Zane Bitter

On 08/12/14 07:00, Murugan, Visnusaran wrote:


Hi Zane & Michael,

Please have a look @ 
https://etherpad.openstack.org/p/execution-stream-and-aggregator-based-convergence

Updated with a combined approach which does not require persisting the graph 
or backup stack removal.


Well, we still have to persist the dependencies of each version of a 
resource _somehow_, because otherwise we can't know how to clean them up 
in the correct order. But what I think you meant to say is that this 
approach doesn't require it to be persisted in a separate table where 
the rows are marked as traversed as we work through the graph.



This approach reduces DB queries by waiting for completion notification on a
topic. The drawback I see is that the delete-stack stream will be huge, as it
will have the entire graph. We can always dump such data in ResourceLock.data
JSON and pass a simple flag "load_stream_from_db" to the converge RPC call as
a workaround for the delete operation.


This seems to be essentially equivalent to my 'SyncPoint' proposal[1], 
with the key difference that the data is stored in-memory in a Heat 
engine rather than the database.


I suspect it's probably a mistake to move it in-memory for similar 
reasons to the argument Clint made against synchronising the marking off 
of dependencies in-memory. The database can handle that and the problem 
of making the DB robust against failures of a single machine has already 
been solved by someone else. If we do it in-memory we are just creating 
a single point of failure for not much gain. (I guess you could argue it 
doesn't matter, since if any Heat engine dies during the traversal then 
we'll have to kick off another one anyway, but it does limit our options 
if that changes in the future.)


It's not clear to me how the 'streams' differ in practical terms from 
just passing a serialisation of the Dependencies object, other than 
being incomprehensible to me ;). The current Dependencies implementation 
(1) is a very generic implementation of a DAG, (2) works and has plenty 
of unit tests, (3) has, with I think one exception, a pretty 
straightforward API, (4) has a very simple serialisation, returned by 
the edges() method, which can be passed back into the constructor to 
recreate it, and (5) has an API that is to some extent relied upon by 
resources, and so won't likely be removed outright in any event. 
Whatever code we need to handle dependencies ought to just build on this 
existing implementation.
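
For point (4), the round trip looks roughly like this (module path and
edge orientation from memory, so treat it as a sketch rather than a
reference):

   # Hedged sketch: serialise a Dependencies graph and rebuild it.
   from heat.engine.dependencies import Dependencies

   deps = Dependencies([('B', 'A'), ('C', 'B')])  # pairs of related nodes
   rebuilt = Dependencies(deps.edges())           # same graph, recreated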


I think the difference may be that the streams only include the 
*shortest* paths (there will often be more than one) to each resource. i.e.


 A <--- B <--- C
 ^             |
 |             |
 +-------------+

can just be written as:

 A <--- B <--- C

because there's only one order in which that can execute anyway. (If 
we're going to do this though, we should just add a method to the 
dependencies.Graph class to delete redundant edges, not create a whole 
new data structure.) There is a big potential advantage here in that it 
reduces the theoretical maximum number of edges in the graph from O(n^2) 
to O(n). (Although in practice real templates are typically not likely 
to have such dense graphs.)
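
For what it's worth, pruning redundant edges from a DAG is a short
pass; a generic sketch (deliberately not Heat's Graph API, with node
names invented):

   # Hedged sketch: drop edge u->v when v is still reachable from u
   # via a longer path, so only order-constraining edges remain.
   def prune_redundant_edges(graph):
       """graph: dict mapping each node to the set of nodes it depends on."""
       def reachable(src, dst, banned_edge):
           stack, seen = [src], set()
           while stack:
               node = stack.pop()
               for nxt in graph.get(node, ()):
                   if (node, nxt) == banned_edge or nxt in seen:
                       continue
                   if nxt == dst:
                       return True
                   seen.add(nxt)
                   stack.append(nxt)
           return False

       for node in graph:
           for dep in set(graph[node]):
               if reachable(node, dep, (node, dep)):
                   graph[node].discard(dep)

   # The diagram above: C depends on both B and A; B depends on A.
   g = {'C': {'B', 'A'}, 'B': {'A'}, 'A': set()}
   prune_redundant_edges(g)
   assert g == {'C': {'B'}, 'B': {'A'}, 'A': set()}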


There's a downside to this too though: say that A in the above diagram 
is replaced during an update. In that case not only B but also C will 
need to figure out what the latest version of A is. One option here is 
to pass that data along via B, but that will become very messy to 
implement in a non-trivial example. The other would be for C to go 
search in the database for resources with the same name as A and the 
current traversal_id marked as the latest. But that not only creates a 
concurrency problem we didn't have before (A could have been updated 
with a new traversal_id at some point after C had established that the 
current traversal was still valid but before it went looking for A), it 
also eliminates all of the performance gains from removing that edge in 
the first place.


[1] 
https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/sync_point.py



To Stop current stack operation, we will use your traversal_id based approach.


+1 :)


If in case you feel Aggregator model creates more queues, then we might have to 
poll DB to get resource status. (Which will impact performance adversely :) )


For the reasons given above I would vote for doing this in the DB. I 
agree there will be a performance penalty for doing so, because we'll be 
paying for robustness.
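
To make the lock-table idea below concrete, the core of it is just an
atomic compare-and-swap UPDATE; a minimal SQLAlchemy-style sketch (the
model and columns are assumptions, not Heat's actual schema):

   from sqlalchemy import Column, String
   from sqlalchemy.ext.declarative import declarative_base

   Base = declarative_base()

   class SyncPoint(Base):  # assumed model, for illustration only
       __tablename__ = 'sync_point'
       entity_id = Column(String(36), primary_key=True)
       traversal_id = Column(String(36), primary_key=True)
       engine_id = Column(String(36), nullable=True)

   def try_claim(session, entity_id, traversal_id, engine_id):
       # UPDATE ... WHERE engine_id IS NULL is atomic in the database;
       # the rowcount tells us whether we won the race.
       rows = session.query(SyncPoint).filter_by(
           entity_id=entity_id,
           traversal_id=traversal_id,
           engine_id=None,
       ).update({'engine_id': engine_id}, synchronize_session=False)
       session.commit()
       return rows == 1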



Lock table: name(Unique - Resource_id), stack_id, engine_id, data (Json to 
store stream dict)


Based on our call on Thursday, I think you're taking the idea of the 
Lock table too literally. The point of referring to locks is that we can 
use the same concepts as the Lock table relies on to do atomic updates 
on a particular row of the database, and we can use those atomic updates 
to prevent race conditi

Re: [openstack-dev] [Compute][Nova] Mid-cycle meetup details for kilo

2014-12-08 Thread Michael Still
Just a reminder that registration for the Nova mid-cycle is now open.
We're currently 50% "sold", so early signup will help us work out if
we need to add more seats or not.

https://www.eventbrite.com.au/e/openstack-nova-kilo-mid-cycle-developer-meetup-tickets-14767182039

Thanks,
Michael

On Thu, Dec 4, 2014 at 10:18 AM, Michael Still  wrote:
> Sigh, sorry. It is of course the Kilo meetup:
>
> https://www.eventbrite.com.au/e/openstack-nova-kilo-mid-cycle-developer-meetup-tickets-14767182039
>
> Michael
>
> On Thu, Dec 4, 2014 at 10:16 AM, Michael Still  wrote:
>> I've just created the signup page for this event. Its here:
>>
>> https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-14767182039
>>
>> Cheers,
>> Michael
>>
>> On Wed, Oct 15, 2014 at 3:45 PM, Michael Still  wrote:
>>> Hi.
>>>
>>> I am pleased to announce details for the Kilo Compute mid-cycle
>>> meetup, but first some background about how we got here.
>>>
>>> Two companies actively involved in OpenStack came forward with offers
>>> to host the Compute meetup. However, one of those companies has
>>> gracefully decided to wait until the L release because of the cold
>>> conditions at their proposed location (think several feet of snow).
>>>
>>> So instead, we're left with California!
>>>
>>> The mid-cycle meetup will be from 26 to 28 January 2015, at the VMWare
>>> offices in Palo Alto California.
>>>
>>> Thanks for VMWare for stepping up and offering to host. It sure does
>>> make my life easy.
>>>
>>> More details will be forthcoming closer to the event, but I wanted to
>>> give people as much notice as possible about dates and location so
>>> they can start negotiating travel if they want to come.
>>>
>>> Cheers,
>>> Michael
>>>
>>> --
>>> Rackspace Australia
>>
>>
>>
>> --
>> Rackspace Australia
>
>
>
> --
> Rackspace Australia



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support

2014-12-08 Thread melanie witt
On Dec 8, 2014, at 13:12, Jeremy Stanley  wrote:

> I'm dubious of this as it basically says "we know this breaks
> sometimes, so we're going to stop testing that it works at all and
> possibly let it get even more broken, but you should be safe to rely
> on it anyway."

+1, it seems bad to enable something everywhere *except* the gate.

I prefer the original suggestion to include a config option, disabled by 
default, that a user can enable if they want.

From what I understand, the feature works "most of the time" and I don't see 
why a user is guaranteed not to encounter the same conditions that happen in 
the gate. For that reason I think it makes sense to be an experimental, opt-in 
by config, feature.

melanie (melwitt)






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-08 Thread Maru Newby

On Dec 7, 2014, at 10:51 AM, Gary Kotton  wrote:

> Hi Kyle,
> I am not missing the point. I understand the proposal. I just think that it 
> has some shortcomings (unless I misunderstand, which will certainly not be 
> the first time and most definitely not the last). The thinning out is to have 
> a shim in place. I understand this and this will be the entry point for the 
> plugin. I do not have a concern for this. My concern is that we are not doing 
> this with the ML2 off the bat. That should lead by example as it is our 
> reference architecture. Let's not kid anyone: we are going to hit some 
> problems with the decomposition. I would prefer that it be done with the 
> default implementation. Why?

The proposal is to move vendor-specific logic out of the tree to increase 
vendor control over such code while decreasing load on reviewers.  ML2 doesn’t 
contain vendor-specific logic - that’s the province of ML2 drivers - so it is 
not a good target for the proposed decomposition by itself.


>   • Because we will fix them quicker, as it is something that prevents 
> Neutron from moving forwards
>   • We will just need to fix in one place first and not in N (where N is 
> the vendor plugins)
>   • This is a community effort – so we will have a lot more eyes on it
>   • It will provide a reference architecture for all new plugins that 
> want to be added to the tree
>   • It will provide a working example for plugins that are already in tree 
> and are to be replaced by the shim
> If we really want to do this, we can say freeze all development (which is 
> just approvals for patches) for a few days so that we can just focus on 
> this. I stated what I think should be the process on the review. For those 
> who do not feel like finding the link:
>   • Create a stack forge project for ML2
>   • Create the shim in Neutron
>   • Update devstack to use the two repos and the shim
> When #3 is up and running we switch that to be the gate. Then we start a 
> stopwatch on all other plugins.

As was pointed out on the spec (see Miguel’s comment on r15), the ML2 plugin 
and the OVS mechanism driver need to remain in the main Neutron repo for now.  
Neutron gates on ML2+OVS and landing a breaking change in the Neutron repo 
along with its corresponding fix to a separate ML2 repo would be all but 
impossible under the current integrated gating scheme.  Plugins/drivers that do 
not gate Neutron have no such constraint.


Maru


> Sure, I’ll catch you on IRC tomorrow. I guess that you guys will bash out the 
> details at the meetup. Sadly I will not be able to attend – so you will have 
> to delay on the tar and feathers.
> Thanks
> Gary
> 
> 
> From: "mest...@mestery.com" 
> Reply-To: OpenStack List 
> Date: Sunday, December 7, 2014 at 7:19 PM
> To: OpenStack List 
> Cc: "openst...@lists.openstack.org" 
> Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition
> 
> Gary, you are still missing the point of this proposal. Please see my comments 
> in review. We are not forcing things out of tree, we are thinning them. The 
> text you quoted in the review makes that clear. We will look at further 
> decomposing ML2 post Kilo, but we have to be realistic with what we can 
> accomplish during Kilo.
> 
> Find me on IRC Monday morning and we can discuss further if you still have 
> questions and concerns.
> 
> Thanks!
> Kyle
> 
> On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton  wrote:
>> Hi,
>> I have raised my concerns on the proposal. I think that all plugins should 
>> be treated on an equal footing. My main concern is having the ML2 plugin in 
>> tree whilst the others will be moved out of tree will be problematic. I 
>> think that the model will be complete if the ML2 was also out of tree. This 
>> will help crystalize the idea and make sure that the model works correctly.
>> Thanks
>> Gary
>> 
>> From: "Armando M." 
>> Reply-To: OpenStack List 
>> Date: Saturday, December 6, 2014 at 1:04 AM
>> To: OpenStack List , 
>> "openst...@lists.openstack.org" 
>> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition
>> 
>> Hi folks,
>> 
>> For a few weeks now the Neutron team has worked tirelessly on [1].
>> 
>> This initiative stems from the fact that as the project matures, evolution 
>> of processes and contribution guidelines need to evolve with it. This is to 
>> ensure that the project can keep on thriving in order to meet the needs of 
>> an ever growing community.
>> 
>> The effort of documenting intentions, and fleshing out the various details 
>> of the proposal is about to reach an end, and we'll soon kick the tires to 
>> put the proposal into practice. Since the spec has grown pretty big, I'll 
>> try to capture the tl;dr below.
>> 
>> If you have any comment please do not hesitate to raise them here and/or 
>> reach out to us.
>> 
>> tl;dr >>>
>> 
>> From the Kilo release, we'll initiate a set of steps to change the following 
>> areas:
>>  • 

Re: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support

2014-12-08 Thread Jeremy Stanley
On 2014-12-08 11:45:36 +0100 (+0100), Kashyap Chamarthy wrote:
> As Dan Berrangé noted, it's nearly impossible to reproduce this issue
> independently outside of OpenStack Gating environment. I brought this up
> at the recently concluded KVM Forum earlier this October. To debug this
> any further, one of the QEMU block layer developers asked if we can get
> QEMU instance running on Gate run under `gdb` (IIRC, danpb suggested
> this too, previously) to get further tracing details.

We document thoroughly how to reproduce the environments we use for
testing OpenStack. There's nothing rarified about "a Gate run" that
anyone with access to a public cloud provider would be unable to
reproduce, save being able to run it over and over enough times to
expose less frequent failures.

> FWIW, I myself couldn't reproduce it independently via libvirt
> alone or via QMP (QEMU Machine Protocol) commands.
> 
> Dan's workaround ("enable it permanently, except for under the
> gate") sounds sensible to me.
[...]

I'm dubious of this as it basically says "we know this breaks
sometimes, so we're going to stop testing that it works at all and
possibly let it get even more broken, but you should be safe to rely
on it anyway."

The QA team tries very hard to make our integration testing
environment as closely as possible mimic real-world deployment
configurations. If these sorts of bugs emerge more often because of,
for example, resource constraints in the test environment then it
should be entirely likely they'd also be seen in production with the
same frequency if run on similarly constrained equipment. And as
we've observed in the past, any code path we stop testing quickly
accumulates new bugs that go unnoticed until they impact someone's
production environment at 3am.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] interesting problem with config filter

2014-12-08 Thread Doug Hellmann
As we’ve discussed a few times, we want to isolate applications from the 
configuration options defined by libraries. One way we have of doing that is 
the ConfigFilter class in oslo.config. When a regular ConfigOpts instance is 
wrapped with a filter, a library can register new options on the filter that 
are not visible to anything that doesn’t have the filter object. Unfortunately, 
the Neutron team has identified an issue with this approach. We have a bug 
report [1] from them about the way we’re using config filters in 
oslo.concurrency specifically, but the issue applies to their use everywhere. 

The neutron tests set the default for oslo.concurrency’s lock_path variable to 
“$state_path/lock”, and the state_path option is defined in their application. 
With the filter in place, interpolation of $state_path to generate the 
lock_path value fails because state_path is not known to the ConfigFilter 
instance.
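
A minimal reproduction of the failure mode, assuming oslo.config's
cfgfilter module (option names taken from the bug report; the default
value for state_path here is invented):

   from oslo.config import cfg, cfgfilter

   conf = cfg.ConfigOpts()
   conf.register_opt(cfg.StrOpt('state_path', default='/var/lib/neutron'))

   fconf = cfgfilter.ConfigFilter(conf)
   fconf.register_opt(cfg.StrOpt('lock_path', default='$state_path/lock'))

   # Interpolation happens when the option is read; the filter cannot
   # see state_path, so this fails rather than returning
   # '/var/lib/neutron/lock'.
   print(fconf.lock_path)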

The reverse would also happen (if the value of state_path was somehow defined 
to depend on lock_path), and that’s actually a bigger concern to me. A deployer 
should be able to use interpolation anywhere, and not worry about whether the 
options are in parts of the code that can see each other. The values are all in 
one file, as far as they know, and so interpolation should “just work”.

I see a few solutions:

1. Don’t use the config filter at all.
2. Make the config filter able to add new options and still see everything else 
that is already defined (only filter in one direction).
3. Leave things as they are, and make the error message better.

Because of the deployment implications of using the filter, I’m inclined to go 
with choice 1 or 2. However, choice 2 leaves open the possibility of a deployer 
wanting to use the value of an option defined by one filtered set of code when 
defining another. I don’t know how frequently that might come up, but it seems 
like the error would be very confusing, especially if both options are set in 
the same config file.

I think that leaves option 1, which means our plans for hiding options from 
applications need to be rethought.

Does anyone else see another solution that I’m missing?

Doug

[1] https://bugs.launchpad.net/oslo.config/+bug/1399897
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] FYI: I just abandoned a large pile of specs

2014-12-08 Thread Kyle Mestery
Folks, not only is today Spec Proposal Deadline (SPD), I also made it
"clear out all old specs" day. I went in and abandoned all specs which were
still out there against Juno. If your spec was abandoned and you were
miraculously going to propose a new version today during SPD, please do so.
Otherwise, I think this will clear up the review queue quite a bit.

Thanks!
Kyle
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Adding temporary code to nova to work around bugs in system utilities

2014-12-08 Thread Jay Pipes

On 12/03/2014 04:00 AM, Tony Breeds wrote:

Hi All,
 I'd like to accomplish 2 things with this message:
1) Unblock (one way or another) https://review.openstack.org/#/c/123957
2) Create some form of consensus on when it's okay to add temporary code to
nova to work around bugs in external utilities.

So some background on this specific issue.  The issue was first reported in
July 2014 at [1] and then clarified at [2].  The synopsis of the bug is that
calling qemu-img convert -O raw /may/ generate a corrupt output file if the
source image isn't fully flushed to disk.  The coreutils folk discovered
something similar in 2011 *sigh*

The clear and correct solution is to ensure that qemu-img uses
FIEMAP_FLAG_SYNC.  This in turn produces a measurable slowdown in that code
path, so additionally it's best if qemu-img uses an alternate method to
determine data status in a disk image.  This has been done and will be included
in qemu 2.2.0 when it's released.  These fixes prompted a more substantial
rework of that code in qemu.  Which is awesome but not *required* to fix the
bug in qemu.

While we wait for $distros to get the fixed qemu nova is still vulnerable to
the bug.  To that end I proposed a work around in nova that forces images
retrieved from glance to disk with an fsync() prior to calling qemu-img on
them.  I admit that this is ugly and has a performance impact.
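
The core of the work around is tiny; a hedged sketch (the helper name
is mine, not what the review uses):

   import os

   def flush_image_to_disk(path):
       # Force the freshly-downloaded image out of the page cache onto
       # stable storage before qemu-img reads it.
       with open(path, 'rb') as f:
           os.fsync(f.fileno())

It would be called between the glance download and the qemu-img
convert.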

In order to reduce the impact of the fsync() I considered:
1) Testing the qemu version and only fsync()ing on affected versions.
- Vendors will backport the fix to their version of qemu.  The fixed version
  will still claim to be 2.1.0 (for example) and therefore trigger the
  fsync() when not required.  Given how unreliable this will be I dismissed
  it as an option

2) API Change
- In the case of this specific bug we only need to fsync() in certain
  scenarios.  It would be easy to add a flag to IMAGE_API.download() to
  determine if this fsync() is required.  This has the nice property of only
  having a performance impact in the suspect case (personally I'll take
  slow-and-correct over fast-and-buggy any day).  My hesitation is that
  after we've modified the API it's very hard to remove that change when we
  decide the work around is redundant.

3) Config file option
- For many of the same reasons as the API change this seemed like a bad
  idea.

Does anyone have any other ideas?

One thing that I haven't done is measure the impact of the fsync() on any
reasonable workload.  This is mainly because I don't really know how.  Sure I
could do some statistics in devstack but I don't really think they'd be
meaningful.  Also the size of the image in glance is fairly important.  An
fsync() of an 100Gb image is many times more painful than an 1Gb image.

While in Paris I was asked to look at other code paths in nova where we use
qemu-img convert.  I'm doing this analysis.  To date I have some suspicions
that snapshot (and migration) are affected, but no data that confirms or
refutes that.  I continue to look at the appropriate code in nova, libvirt and
qemu.

I understand that there is more work to be done in this area, and I'm happy to
do it.  Having said that from where I sit that work is not directly related to
the bug that started this.

As the idea is to remove this code as soon as all the distros we care about
have a fixed qemu I started an albeit brief discussion here[3] on which distros
are in that list.  Armed with that list I have opened (or am in the process of
opening) bugs for each version of each distribution to make them aware of the
issue and the fix.  I have a status page at [4].

okay I think I'm done raving.

So moving forward:

1) So what should I do with the open review?


I reviewed the patch. I don't mind the idea of a [workarounds] section 
of configuration options, but I had an issue with where that code was 
executed.



2) What can we learn from this in terms of how we work around key utilities
that are not in our direct power to change.
- Is taking ugly code for "some time" okay?  I understand that this is a
  complex issue as we're relying on $developer to be around (or leave enough
  information for those that follow) to determine when it's okay to remove
  the ugliness.


I think it would be fine to have a [workarounds] config section for just 
this purpose.
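
For concreteness, registering such a section is trivial (the option
name here is hypothetical, not what the patch uses):

   from oslo.config import cfg

   workarounds_opts = [
       cfg.BoolOpt('flush_images_before_convert',
                   default=False,
                   help='fsync() images fetched from glance before '
                        'running qemu-img convert on them.'),
   ]

   cfg.CONF.register_opts(workarounds_opts, group='workarounds')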


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-12-08 Thread Vadivel Poonathan
Hi Anne,

I provided my comment in the review itself.. and pasted below for your
quick view.

Thanks,
Vad
--

Vadivel Poonathan, 12:05 PM

I understand the "other drivers" mean the external drivers which are not
part of the official openstack repository. So the reference drivers which
are part of the official openstack repository will be "fully" documented
and for the other such external drivers, this new idea of listing them with
a short description and a link to vendor provided webpage is proposed!.

So why is it then stated that ""Only drivers are covered that are
contained in the official OpenStack project repository""? I'm
confused!

If this new proposal (short desc and external link) is again meant for only
the drivers that are part of the official openstack main repository - then
why do we need this proposal at all?...

I believe this originally stemmed from the fact that we need a placeholder
for listing the out-of-tree drivers. Since they are not part of the
official openstack release/repository, they cannot be documented or listed
in the existing documentation. Hence this idea of providing a placeholder
with a short description and an external link is proposed, and the
out-of-tree vendors will maintain their plugins/drivers and detailed
documentation themselves.

Pls. let me know if I'm missing something.

On Thu, Dec 4, 2014 at 9:18 AM, Anne Gentle  wrote:

> Hi Vadivel,
> We do have a blueprint in the docs-specs repo under review for driver
> documentation and I'd like to get your input.
> https://review.openstack.org/#/c/133372/
>
> Here's a relevant excerpt:
>
> The documentation team will fully document the reference drivers as
> specified below and just add short sections for other drivers.
>
> Guidelines for drivers that will be documented fully in the OpenStack
> documentation:
>
> * The complete solution must be open source and use standard hardware
> * The driver must be part of the respective OpenStack repository
> * The driver is considered one of the reference drivers
>
> For documentation of other drivers, the following guidelines apply:
>
> * The Configuration Reference will contain a small section for each
>   driver, see below for details
> * Only drivers are covered that are contained in the official
>   OpenStack project repository for drivers (for example in the main
>   project repository or the official "third party" repository).
>
> With this policy, the docs team will document in their guides the
> following:
>
> * For cinder: volume drivers: document LVM only (TBD later: Samba,
>   glusterfs); backup drivers: document swift (TBD later: ceph)
> * For glance: Document local storage, cinder, and swift as backends
> * For neutron: document ML2 plug-in with the mechanisms drivers
>   OpenVSwitch and LinuxBridge
> * For nova: document KVM (mostly), send Xen open source call for help
> * For sahara: apache hadoop
> * For trove: document all supported Open Source database engines like
>   MySQL.
>
> Let us know in the review itself if this answers your question about
> third-party drivers not in an official repository.
> Thanks,
> Anne
>
> On Thu, Dec 4, 2014 at 9:59 AM, Vadivel Poonathan <
> vadivel.openst...@gmail.com> wrote:
>
>> Hi Kyle and all,
>>
>> Was there any conclusion in the design summit or the meetings afterward
>> about splitting the vendor plugins/drivers from the mainstream neutron and
>> documentation of out-of-tree plugins/drivers?...
>>
>> Thanks,
>> Vad
>> --
>>
>>
>> On Thu, Oct 23, 2014 at 11:27 AM, Kyle Mestery 
>> wrote:
>>
>>> On Thu, Oct 23, 2014 at 12:35 PM, Vadivel Poonathan
>>>  wrote:
>>> > Hi Kyle and Anne,
>>> >
>>> > Thanks for the clarifications... understood and it makes sense.
>>> >
>>> > However, per my understanding, the drivers (aka plugins) are meant to
>>> be
>>> > developed and supported by third-party vendors, outside of the
>>> OpenStack
>>> > community, and they are supposed to work as plug-n-play... they are
>>> not part
>>> > of the core OpenStack development, nor any of its components. If that
>>> is the
>>> > case, then why should OpenStack community include and maintain them as
>>> part
>>> > of it, for every release?...  Wouldnt it be enough to limit the scope
>>> with
>>> > the plugin framework and built-in drivers such as LinuxBridge or OVS
>>> etc?...
>>> > not extending to commercial vendors?...  (It is just a curious
>>> question,
>>> > forgive me if i missed something and correct me!).
>>> >
>>> You haven't misunderstood anything, we're in the process of splitting
>>> these things out, and this will be a prime focus of the Neutron design
>>> summit track at the upcoming summit.
>>>
>>> Thanks,
>>> Kyle
>>>
>>> > At the same time, IMHO, there must be some reference or a page within
>>> the
>>> > scope of OpenStack documentation (not necessarily the core docs, but
>>> some
>>> > wiki page or reference link or so - as Anne suggested) to mention the
>>> list
>>> > of the drivers/plugins supported as of given release and may be an
>>> ext

Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-08 Thread Carl Baldwin
Ryan,

I'll be traveling around the time of the L3 meeting this week.  My
flight leaves 40 minutes after the meeting and I might have trouble
attending.  It might be best to put it off a week or to plan another
time -- maybe Friday -- when we could discuss it in IRC or in a
Hangout.

Carl

On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger
 wrote:
> Thanks for getting back Carl. I think we may be able to make this weeks
> meeting. Jason Kölker is the engineer doing all of the lifting on this side.
> Let me get with him to review what you all have so far and check our
> availability.
>
> 
>
> Ryan Clevenger
> Manager, Cloud Engineering - US
> m: 678.548.7261
> e: ryan.cleven...@rackspace.com
>
> 
> From: Carl Baldwin [c...@ecbaldwin.net]
> Sent: Sunday, December 07, 2014 4:04 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation
> and collaboration
>
> Ryan,
>
> I have been working with the L3 sub team in this direction.  Progress has
> been slow because of other priorities but we have made some.  I have written
> a blueprint detailing some changes needed to the code to enable the
> flexibility to one day run glaring ups on an l3 routed network [1].  Jaime
> has been working on one that integrates ryu (or other speakers) with neutron
> [2].  Dvr was also a step in this direction.
>
> I'd like to invite you to the l3 weekly meeting [3] to discuss further.  I'm
> very happy to see interest in this area and have someone new to collaborate.
>
> Carl
>
> [1] https://review.openstack.org/#/c/88619/
> [2] https://review.openstack.org/#/c/125401/
> [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
>
> On Dec 3, 2014 4:04 PM, "Ryan Clevenger" 
> wrote:
>>
>> Hi,
>>
>> At Rackspace, we have a need to create a higher level networking service
>> primarily for the purpose of creating a Floating IP solution in our
>> environment. The current solutions for Floating IPs, being tied to plugin
>> implementations, does not meet our needs at scale for the following reasons:
>>
>> 1. Limited endpoint H/A mainly targeting failover only and not
>> multi-active endpoints,
>> 2. Lack of noisy neighbor and DDOS mitigation,
>> 3. IP fragmentation (with cells, public connectivity is terminated inside
>> each cell leading to fragmentation and IP stranding when cell CPU/Memory use
>> doesn't line up with allocated IP blocks. Abstracting public connectivity
>> away from nova installations allows for much more efficient use of those
>> precious IPv4 blocks).
>> 4. Diversity in transit (multiple encapsulation and transit types on a per
>> floating ip basis).
>>
>> We realize that network infrastructures are often unique and such a
>> solution would likely diverge from provider to provider. However, we would
>> love to collaborate with the community to see if such a project could be
>> built that would meet the needs of providers at scale. We believe that, at
>> its core, this solution would boil down to terminating north<->south traffic
>> temporarily at a massively horizontally scalable centralized core and then
>> encapsulating traffic east<->west to a specific host based on the
>> association setup via the current L3 router's extension's 'floatingips'
>> resource.
>>
>> Our current idea, involves using Open vSwitch for header rewriting and
>> tunnel encapsulation combined with a set of Ryu applications for management:
>>
>> https://i.imgur.com/bivSdcC.png
>>
>> The Ryu application uses Ryu's BGP support to announce up to the Public
>> Routing layer individual floating ips (/32's or /128's) which are then
>> summarized and announced to the rest of the datacenter. If a particular
>> floating ip is experiencing unusually large traffic (DDOS, slashdot effect,
>> etc.), the Ryu application could change the announcements up to the Public
>> layer to shift that traffic to dedicated hosts setup for that purpose. It
>> also announces a single /32 "Tunnel Endpoint" ip downstream to the TunnelNet
>> Routing system which provides transit to and from the cells and their
>> hypervisors. Since traffic from either direction can then end up on any of
>> the FLIP hosts, a simple flow table to modify the MAC and IP in either the
>> SRC or DST fields (depending on traffic direction) allows the system to be
>> completely stateless. We have proven this out (with static routing and
>> flows) to work reliably in a small lab setup.
>>
>> On the hypervisor side, we currently plumb networks into separate OVS
>> bridges. Another Ryu application would control the bridge that handles
>> overlay networking to selectively divert traffic destined for the default
>> gateway up to the FLIP NAT systems, taking into account any configured
>> logical routing and local L2 traffic to pass out into the existing overlay
>> fabric undisturbed.
>>
>> Adding in support for L2VPN EVPN
>> (https://tools.ietf.org/html/draft-ietf-l2vpn-ev

Re: [openstack-dev] Where should Schema files live?

2014-12-08 Thread Lance Bragstad
Keystone also has API documentation in the keystone-spec repo [1], which
went in with [2] and [3].

[1] https://github.com/openstack/keystone-specs/tree/master/api
[2] https://review.openstack.org/#/c/128712/
[3] https://review.openstack.org/#/c/130577/

On Mon, Dec 8, 2014 at 1:06 PM, Adam Young  wrote:

> Isn't this what the API repos are for?  Should EG the Keystone schemas be
> served from
>
> https://github.com/openstack/identity-api/
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-12-08 Thread Adam Young
Isn't this what the API repos are for?  Should EG the Keystone schemas 
be served from


https://github.com/openstack/identity-api/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2014-12-08 Thread Paul Michali (pcm)
Way to go Kevin and Henry!



PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Dec 8, 2014, at 11:02 AM, Kyle Mestery  wrote:

> On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery  wrote:
> Now that we're in the thick of working hard on Kilo deliverables, I'd
> like to make some changes to the neutron core team. Reviews are the
> most important part of being a core reviewer, so we need to ensure
> cores are doing reviews. The stats for the 180 day period [1] indicate
> some changes are needed for cores who are no longer reviewing.
> 
> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
> neutron-core. Bob and Nachi have been core members for a while now.
> They have contributed to Neutron over the years in reviews, code and
> leading sub-teams. I'd like to thank them for all that they have done
> over the years. I'd also like to propose that should they start
> reviewing more going forward the core team looks to fast track them
> back into neutron-core. But for now, their review stats place them
> below the rest of the team for 180 days.
> 
> As part of the changes, I'd also like to propose two new members to
> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
> been very active in reviews, meetings, and code for a while now. Henry
> lead the DB team which fixed Neutron DB migrations during Juno. Kevin
> has been actively working across all of Neutron, he's done some great
> work on security fixes and stability fixes in particular. Their
> comments in reviews are insightful and they have helped to onboard new
> reviewers and taken the time to work with people on their patches.
> 
> Existing neutron cores, please vote +1/-1 for the addition of Henry
> and Kevin to the core team.
> 
> Enough time has passed now, and Kevin and Henry have received enough +1 
> votes. So I'd like to welcome them to the core team!
> 
> Thanks,
> Kyle
>  
> Thanks!
> Kyle
> 
> [1] http://stackalytics.com/report/contribution/neutron-group/180
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] Third-party CI account creation is now self-serve

2014-12-08 Thread Anita Kuno
On 12/08/2014 09:33 AM, Jay Pipes wrote:
> On 12/03/2014 03:56 PM, Anita Kuno wrote:
>> As of now, third-party CI account creation is self-serve. I think
>> this makes everybody happy.
>>
>> What does this mean?
>>
>> Well for a new third-party account this means you follow the new
>> process, outlined here:
>> http://ci.openstack.org/third_party.html#creating-a-service-account
>>
>> If you don't have enough information from these docs, please contact the
>> infra team; we will then work on a patch, once you learn what you needed,
>> to fill in the holes for others.
>>
>> If you currently have a third-party CI account on Gerrit, this is what
>> will happen with your account:
>> http://ci.openstack.org/third_party.html#permissions-on-your-third-party-system
>>
>>
>> Short story is we will be moving voting accounts into project specific
>> voting groups. Your voting rights will not change, but will be directly
>> managed by project release groups.
>> Non voting accounts will be removed from the now redundant Third-Party
>> CI group and otherwise will not be changed.
>>
>> If you are a member of a <project>-release group for a project currently
>> receiving third-party CI votes, you will find that you have access to
>> manage membership in a new group in Gerrit called <project>-ci.  To
>> allow a CI system to vote on your project, add it to the <project>-ci
>> group, and to disable voting on your project, remove it from that
>> group.
>>
>> We hope you are as excited about this change as we are.
>>
>> Let us know if you have questions, do try to work with third-party
>> project representatives as much as you can.
> 
> Excellent work, Anita and the infra team, thank you so much!
> 
> -jay
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Thanks Jay.

Clark and Jeremy did the heavy lifting here. We are glad this is in
place, hopefully this will work well for all concerned.

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2014-12-08 Thread Kyle Mestery
On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery  wrote:

> Now that we're in the thick of working hard on Kilo deliverables, I'd
> like to make some changes to the neutron core team. Reviews are the
> most important part of being a core reviewer, so we need to ensure
> cores are doing reviews. The stats for the 180 day period [1] indicate
> some changes are needed for cores who are no longer reviewing.
>
> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
> neutron-core. Bob and Nachi have been core members for a while now.
> They have contributed to Neutron over the years in reviews, code and
> leading sub-teams. I'd like to thank them for all that they have done
> over the years. I'd also like to propose that, should they start
> reviewing more going forward, the core team look to fast-track them
> back into neutron-core. But for now, their review stats place them
> below the rest of the team for 180 days.
>
> As part of the changes, I'd also like to propose two new members to
> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
> been very active in reviews, meetings, and code for a while now. Henry
> led the DB team which fixed Neutron DB migrations during Juno. Kevin
> has been actively working across all of Neutron, he's done some great
> work on security fixes and stability fixes in particular. Their
> comments in reviews are insightful and they have helped to onboard new
> reviewers and taken the time to work with people on their patches.
>
> Existing neutron cores, please vote +1/-1 for the addition of Henry
> and Kevin to the core team.
>
> Enough time has passed now, and Kevin and Henry have received enough +1
votes. So I'd like to welcome them to the core team!

Thanks,
Kyle


> Thanks!
> Kyle
>
> [1] http://stackalytics.com/report/contribution/neutron-group/180
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] services split starting today

2014-12-08 Thread Kyle Mestery
Copying the operators list here to gain additional visibility for trunk
chasers.

On Mon, Dec 8, 2014 at 10:19 AM, Doug Wiegley  wrote:

> To all neutron cores,
>
> Please do not approve any gerrit reviews for advanced services code for
> the next few days.  We will post again when those reviews can resume.
>
> Thanks,
> Doug
>
>
>
> On 12/8/14, 8:49 AM, "Doug Wiegley"  wrote:
>
> >Hi all,
> >
> >The neutron advanced services split is starting today at 9am PDT, as
> >described here:
> >
> >https://review.openstack.org/#/c/136835/
> >
> >
> >.. The remove change from neutron can be seen here:
> >
> >https://review.openstack.org/#/c/139901/
> >
> >
> >.. While the new repos are being sorted out, advanced services will be
> >broken, and services tempest tests will be disabled.  Either grab Juno, or
> >an earlier rev of neutron.
> >
> >The new repos are: neutron-lbaas, neutron-fwaas, neutron-vpnaas.
> >
> >Thanks,
> >Doug
> >
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-12-08 Thread Eoghan Glynn


> >From: Sandy Walsh [sandy.wa...@rackspace.com] Monday, December 01, 2014 9:29
> >AM
> > 
> >>From: Duncan Thomas [duncan.tho...@gmail.com]
> >>Sent: Sunday, November 30, 2014 5:40 AM
> >>To: OpenStack Development Mailing List
> >>Subject: Re: [openstack-dev] Where should Schema files live?
> >> 
> >>Duncan Thomas
> >>On Nov 27, 2014 10:32 PM, "Sandy Walsh"  wrote:
> >>> 
> >>> We were thinking each service API would expose their schema via a new
> >>> /schema resource (or something). Nova would expose its schema. Glance
> >>> its own. etc. This would also work well for installations still using
> >>> older deployments.
> >>This feels like externally exposing info that need not be external (since
> >>the notifications are not external to the deploy) and it sounds like it
>>will potentially leak fine-grained version and maybe deployment config
> >>details that you don't want to make public - either for commercial reasons
> >>or to make targeted attacks harder
> >> 
> > 
> >Yep, good point. Makes a good case for standing up our own service or just
> >relying on the tarballs being in a well-known place.
> 
> Hmm, I wonder if it makes sense to limit the /schema resource to service
> accounts. Expose it by role.
> 
> There's something in the back of my head that doesn't like calling out to the
> public API though. Perhaps unfounded.

I'm wondering here how this relates to the other URLs in the
service catalog that aren't intended for external consumption,
e.g. the internalURL and adminURL.

I had assumed that these URLs would be visible to external clients,
but protected by firewall rules such that clients would be unable
to do anything in anger with those raw addresses from the outside.

So would including a schemaURL in the service catalog actually
expose an attack surface, assuming this was in general safely
firewalled off in any realistic deployment?
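
To make that concrete, a purely illustrative sketch (in Python, with made-up
addresses) of an endpoint record extended with a hypothetical schemaURL
alongside the existing trio:

# Illustrative only: a service catalog endpoint extended with a
# hypothetical schemaURL, firewalled off like internalURL/adminURL.
endpoint = {
    "name": "nova",
    "type": "compute",
    "publicURL": "https://compute.example.com:8774/v2/",
    "internalURL": "http://10.0.0.10:8774/v2/",
    "adminURL": "http://10.0.0.10:8774/v2/",
    "schemaURL": "http://10.0.0.10:8774/schema/",  # hypothetical addition
}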

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Using query string or request body to pass parameter

2014-12-08 Thread Kevin L. Mitchell
On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote:
> I wonder if we can use a body in DELETE; currently there isn't any
> such case in the v2/v3 API.

No, many frameworks raise an error if you try to include a body with a
DELETE request.
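
As an illustration of the alternative -- passing parameters in the query
string rather than the body -- a sketch using the Python requests library
(the endpoint and parameter names are invented):

import requests

# Hypothetical endpoint: selectors go in the query string, since many
# frameworks reject (or silently drop) a body on DELETE requests.
resp = requests.delete(
    "http://compute.example.com:8774/v2/servers/abc123/tags",
    params={"tag": "web"},  # becomes ?tag=web
)
resp.raise_for_status()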
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes/log 12/08/2014

2014-12-08 Thread Nikolay Makhotkin
Thanks for joining our team meeting today!

 * Meeting minutes:
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-08-16.04.html
 * Meeting log:
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-08-16.04.log.html

The next meeting is scheduled for Dec 15 at 16.00 UTC.

-- 
Best Regards,
Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Freeze on L3 agent

2014-12-08 Thread Carl Baldwin
For the next few weeks, we'll be tackling L3 agent restructuring [1]
in earnest.  This will require some heavy lifting, especially
initially, in the l3_agent.py file.  Because of this, I'd like to ask
that we not approve any non-critical changes to the L3 agent that are
unrelated to this restructuring starting today.  After the heavy
lifting has merged, I will notify again.  I imagine that this effort
will take a few weeks realistically.

Carl

[1] https://review.openstack.org/#/c/131535/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Remove XML support from Nova API

2014-12-08 Thread Davanum Srinivas
Hi Team,

We disabled XML support when
https://review.openstack.org/#/c/134332/ merged.

I've prepared a followup patch series to entirely remove XML support
[1] soon after we ship K1. I've marked it as WIP for now though all
tests are working fine.

Looking forward to your feedback.

thanks,
dims

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:nuke-xml,n,z

-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] Third-party CI account creation is now self-serve

2014-12-08 Thread Jay Pipes

On 12/03/2014 03:56 PM, Anita Kuno wrote:

As of now, third-party CI account creation is self-serve. I think
this makes everybody happy.

What does this mean?

Well for a new third-party account this means you follow the new
process, outlined here:
http://ci.openstack.org/third_party.html#creating-a-service-account

If you don't have enough information from these docs, please contact the
infra team; we will then work on a patch, once you learn what you needed,
to fill in the holes for others.

If you currently have a third-party CI account on Gerrit, this is what
will happen with your account:
http://ci.openstack.org/third_party.html#permissions-on-your-third-party-system

Short story is we will be moving voting accounts into project specific
voting groups. Your voting rights will not change, but will be directly
managed by project release groups.
Non voting accounts will be removed from the now redundant Third-Party
CI group and otherwise will not be changed.

If you are a member of a <project>-release group for a project currently
receiving third-party CI votes, you will find that you have access to
manage membership in a new group in Gerrit called <project>-ci.  To
allow a CI system to vote on your project, add it to the <project>-ci
group, and to disable voting on your project, remove it from that group.

We hope you are as excited about this change as we are.

Let us know if you have questions, and do try to work with third-party
project representatives as much as you can.


Excellent work, Anita and the infra team, thank you so much!

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] services split starting today

2014-12-08 Thread Doug Wiegley
To all neutron cores,

Please do not approve any gerrit reviews for advanced services code for
the next few days.  We will post again when those reviews can resume.

Thanks,
Doug



On 12/8/14, 8:49 AM, "Doug Wiegley"  wrote:

>Hi all,
>
>The neutron advanced services split is starting today at 9am PDT, as
>described here:
>
>https://review.openstack.org/#/c/136835/
>
>
>.. The remove change from neutron can be seen here:
>
>https://review.openstack.org/#/c/139901/
>
>
>.. While the new repos are being sorted out, advanced services will be
>broken, and services tempest tests will be disabled.  Either grab Juno, or
>an earlier rev of neutron.
>
>The new repos are: neutron-lbaas, neutron-fwaas, neutron-vpnaas.
>
>Thanks,
>Doug
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Query on creating multiple resources

2014-12-08 Thread Nikolay Makhotkin
Hi, Sushma!

Can we create multiple resources using a single task, like multiple
> keypairs or security-groups or networks etc?


Yes, we can. This feature is in development now and it is considered
experimental -
https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections

Just clone the latest master branch of mistral.

You can specify the "for-each" task property and provide the array of data to
your workflow:

 

version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
type: direct
input:
  - array_with_names_and_descriptions

tasks:
  create_secgroups:

for-each:

  data: $.array_with_names_and_descriptions
action: nova.security_groups_create name={$.data.name}
description={$.data.description}
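
The input for the workflow above then just carries the array under the
declared name -- for example (values invented), shown here as a Python dict
that would be serialized to JSON when starting the execution:

# Illustrative workflow input for the for-each example above; this would
# typically be saved as JSON and passed to `mistral execution-create`.
workflow_input = {
    "array_with_names_and_descriptions": [
        {"name": "secgrp1", "description": "created via mistral"},
        {"name": "secgrp2", "description": "created via mistral"},
    ],
}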


On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter  wrote:

> On 08/12/14 09:41, Sushma Korati wrote:
>
>> Can we create multiple resources using a single task, like multiple
>> keypairs or security-groups or networks etc?
>>
>
> Define them in a Heat template and create the Heat stack as a single task.
>
> - ZB
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards,
Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] services split starting today

2014-12-08 Thread Doug Wiegley
Hi all,

The neutron advanced services split is starting today at 9am PDT, as
described here:

https://review.openstack.org/#/c/136835/


.. The remove change from neutron can be seen here:

https://review.openstack.org/#/c/139901/


.. While the new repos are being sorted out, advanced services will be
broken, and services tempest tests will be disabled.  Either grab Juno, or
an earlier rev of neutron.

The new repos are: neutron-lbaas, neutron-fwaas, neutron-vpnaas.

Thanks,
Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Horizon] drag and drop widget in horizon.

2014-12-08 Thread uday bhaskar
We are looking for documentation for the widget used in the launch instance
form of Horizon, where, on the network tab, you are able to select networks
in a particular order. How is this implemented? Is there any widget
available to reuse? Any help is appreciated.






Thanks
Uday Bhaskar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-08 Thread Ryan Clevenger
Thanks for getting back, Carl. I think we may be able to make this week's 
meeting. Jason Kölker is the engineer doing all of the lifting on this side. 
Let me get with him to review what you all have so far and check our 
availability.




Ryan Clevenger
Manager, Cloud Engineering - US
m: 678.548.7261
e: ryan.cleven...@rackspace.com


From: Carl Baldwin [c...@ecbaldwin.net]
Sent: Sunday, December 07, 2014 4:04 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and 
collaboration


Ryan,

I have been working with the L3 sub-team in this direction.  Progress has been 
slow because of other priorities but we have made some.  I have written a 
blueprint detailing some changes needed to the code to enable the flexibility 
to one day run floating IPs on an L3-routed network [1].  Jaime has been working 
on one that integrates Ryu (or other speakers) with neutron [2].  DVR was also 
a step in this direction.

I'd like to invite you to the l3 weekly meeting [3] to discuss further.  I'm 
very happy to see interest in this area and have someone new to collaborate.

Carl

[1] https://review.openstack.org/#/c/88619/
[2] https://review.openstack.org/#/c/125401/
[3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

On Dec 3, 2014 4:04 PM, "Ryan Clevenger"
<ryan.cleven...@rackspace.com> wrote:
Hi,

At Rackspace, we have a need to create a higher level networking service 
primarily for the purpose of creating a Floating IP solution in our 
environment. The current solutions for Floating IPs, being tied to plugin 
implementations, do not meet our needs at scale for the following reasons:

1. Limited endpoint H/A mainly targeting failover only and not multi-active 
endpoints,
2. Lack of noisy neighbor and DDOS mitigation,
3. IP fragmentation (with cells, public connectivity is terminated inside each 
cell leading to fragmentation and IP stranding when cell CPU/Memory use doesn't 
line up with allocated IP blocks. Abstracting public connectivity away from 
nova installations allows for much more efficient use of those precious IPv4 
blocks).
4. Diversity in transit (multiple encapsulation and transit types on a per 
floating ip basis).

We realize that network infrastructures are often unique and such a solution 
would likely diverge from provider to provider. However, we would love to 
collaborate with the community to see if such a project could be built that 
would meet the needs of providers at scale. We believe that, at its core, this 
solution would boil down to terminating north<->south traffic temporarily at a 
massively horizontally scalable centralized core and then encapsulating traffic 
east<->west to a specific host based on the association set up via the current 
L3 router extension's 'floatingips' resource.

Our current idea, involves using Open vSwitch for header rewriting and tunnel 
encapsulation combined with a set of Ryu applications for management:

https://i.imgur.com/bivSdcC.png

The Ryu application uses Ryu's BGP support to announce up to the Public Routing 
layer individual floating ips (/32's or /128's) which are then summarized and 
announced to the rest of the datacenter. If a particular floating ip is 
experiencing unusually large traffic (DDOS, slashdot effect, etc.), the Ryu 
application could change the announcements up to the Public layer to shift that 
traffic to dedicated hosts setup for that purpose. It also announces a single 
/32 "Tunnel Endpoint" ip downstream to the TunnelNet Routing system which 
provides transit to and from the cells and their hypervisors. Since traffic 
from either direction can then end up on any of the FLIP hosts, a simple flow 
table to modify the MAC and IP in either the SRC or DST fields (depending on 
traffic direction) allows the system to be completely stateless. We have proven 
this out (with static routing and flows) to work reliably in a small lab setup.
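
To give a flavour of what we mean (the addresses, MAC and port numbers below
are invented; the real mapping comes from the floating IP association), a Ryu
OpenFlow 1.3 sketch of the inbound rewrite might look like:

def add_inbound_flip_flow(dp):
    # dp: a Ryu Datapath, as handed to an app's switch-features handler.
    # Rewrite DST from the floating IP/MAC to the VM's fixed IP/MAC and
    # forward out the tunnel-facing port -- stateless, no conn tracking.
    ofproto = dp.ofproto
    parser = dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='203.0.113.10')
    actions = [
        parser.OFPActionSetField(eth_dst='fa:16:3e:00:00:01'),
        parser.OFPActionSetField(ipv4_dst='10.0.0.5'),
        parser.OFPActionOutput(2),  # hypothetical tunnel port
    ]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                         actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                  match=match, instructions=inst))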

On the hypervisor side, we currently plumb networks into separate OVS bridges. 
Another Ryu application would control the bridge that handles overlay 
networking to selectively divert traffic destined for the default gateway up to 
the FLIP NAT systems, taking into account any configured logical routing and 
local L2 traffic to pass out into the existing overlay fabric undisturbed.

Adding in support for L2VPN EVPN 
(https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN Overlay 
(https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the Ryu BGP 
speaker will allow the hypervisor side Ryu application to advertise up to the 
FLIP system reachability information to take into account VM failover, 
live-migrate, and supported encapsulation types. We believe that decoupling the 
tunnel endpoint discovery from the control plane (Nova/Neutron) will provide 
for a more robust solution as well as al

Re: [openstack-dev] [Mistral] Query on creating multiple resources

2014-12-08 Thread Zane Bitter

On 08/12/14 09:41, Sushma Korati wrote:

Can we create multiple resources using a single task, like multiple
keypairs or security-groups or networks etc?


Define them in a Heat template and create the Heat stack as a single task.

- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] CI report : 1/11/2014 - 4/12/2014

2014-12-08 Thread Derek Higgins
On 04/12/14 13:37, Dan Prince wrote:
> On Thu, 2014-12-04 at 11:51 +, Derek Higgins wrote:
>> A month since my last update, sorry my bad
>>
>> since the last email we've had 5 incidents causing ci failures
>>
>> 26/11/2014 : Lots of ubuntu jobs failed over 24 hours (maybe half)
>> - We seem to suffer any time an Ubuntu mirror isn't in sync, causing hash
>> mismatch errors. For now I've pinned DNS on our proxy to a specific
>> server so we stop DNS round robining
> 
> This sound fine to me. I personally like the model where you pin to a
> specific mirror, perhaps one that is geographically closer to your
> datacenter. This also makes Squid caching (in the rack) happier in some
> cases.
> 
>>
>> 21/11/2014 : All tripleo jobs failed for about 16 hours
>> - Neutron started asserting that local_ip be set to a valid ip address,
>> on the seed we had been leaving it blank
>> - Cinder moved to using oslo.concurrency which in turn requires that
>> lock_path be set, we are now setting it
> 
> 
> Thinking about how we might catch these ahead of time with our limited
> resources ATM. These sorts of failures all seem related to configuration
> and/or requirements changes. I wonder if we could selectively
> (automatically) run check experimental jobs on all reviews with
> associated tickets which have either doc changes or modify
> requirements.txt. Probably a bit of work to pull this off, but if we had
> a report containing these results "coming down the pike" we might be
> able to catch them ahead of time.
Yup, this sounds like it could be beneficial. Alternatively, if we soon
have the capacity to run on more projects (capacity is increasing) we'll
be running on all reviews and we'll be able to generate the report you're
talking about. Either way, we should do something like this soon.

> 
> 
>>
>> 8/11/2014 : All fedora tripleo jobs failed for about 60 hours (over a
>> weekend)
>> - A URL being accessed on https://bzr.linuxfoundation.org is no longer
>> available, we removed the dependency
>>
>> 7/11/2014 : All tripleo tests failed for about 24 hours
>> - Options were removed from nova.conf that had been deprecated (although
>> no deprecation warnings were being reported); we were still using these
>> in tripleo
>>
>> as always more details can be found here
>> https://etherpad.openstack.org/p/tripleo-ci-breakages
> 
> Thanks for sending this out! Very useful.
no problem
> 
> Dan
> 
>>
>> thanks,
>> Derek.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday December 9th at 19:00 UTC

2014-12-08 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday December 9th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

And in case you missed it, meeting log and minutes from the last
meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-02-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-02-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-02-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] HA issues

2014-12-08 Thread Dulko, Michal
Hi all!

At the summit, during the cross-project HA session, multiple Cinder issues 
were mentioned. These can be found in this etherpad: 
https://etherpad.openstack.org/p/kilo-crossproject-ha-integration

Is there any ongoing effort to fix these issues? Is there an idea of how to 
approach any of them?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB][api] Hello from the API WG

2014-12-08 Thread Everett Toews
Hello MagnetoDB!

During the latest meeting [1] of the API Working Group (WG) we noticed that 
MagnetoDB made use of the APIImpact flag [2]. That’s excellent and exactly how 
we were hoping the use of the flag as a discovery mechanism would work!

We were wondering if the MagnetoDB team would like to designate a cross-project 
liaison [3] for the API WG?

We would communicate with that person a bit more closely and figure out how we 
can best help your project. Perhaps they could attend an API WG Meeting [4] to 
get started.

One thing that came up during the meeting was my suggestion that, if MagnetoDB 
had an API definition (like Swagger [5]), we could review the API design 
independently of the source code that implements the API. There are many other 
benefits of an API definition for documentation, testing, validation, and 
client creation. 

Does an API definition exist for the MagnetoDB API or would you be interested 
in creating one?

Either way we’d like to hear your thoughts on the subject.

Cheers,
Everett

P.S. Just to set expectations properly, please note that review of the API by 
the WG does not endorse the project in any way. We’re just trying to help 
design better APIs that are consistent with the rest of the OpenStack APIs.

[1] 
http://eavesdrop.openstack.org/meetings/api_wg/2014/api_wg.2014-12-04-16.01.html
[2] https://review.openstack.org/#/c/138059/
[3] https://wiki.openstack.org/wiki/CrossProjectLiaisons#API_Working_Group
[4] https://wiki.openstack.org/wiki/Meetings/API-WG
[5] http://swagger.io/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Nailgun] Web framework

2014-12-08 Thread Ryan Petrello
Feel free to ask any questions you have in #pecanpy on IRC; I can answer a lot
more quickly than researching the docs would, and if you have a special need, I
can usually accommodate it with changes to Pecan (I've done so with several
OpenStack projects in the past).
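
For anyone starting that research, a minimal sketch of a Pecan app (not
Nailgun code, just the bare object-dispatch pattern) looks like this:

import pecan
from wsgiref.simple_server import make_server

class RootController(object):
    @pecan.expose('json')
    def index(self):
        # Object dispatch: GET / is routed to this method.
        return {'status': 'ok'}

# pecan.Pecan wraps the controller tree into a plain WSGI application.
application = pecan.Pecan(RootController())

if __name__ == '__main__':
    make_server('127.0.0.1', 8080, application).serve_forever()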
On 12/08/14 02:10 PM, Nikolay Markov wrote:
> > Yes, and it's been 4 days since last message in this thread and no
> > objections, so it seems
> > that Pecan in now our framework-of-choice for Nailgun and future
> > apps/projects.
> 
> We still need to do some research on the technical issues and how easily
> we can move to Pecan. Thanks to Ryan, we now have multiple links to
> solutions and docs on the discussed issues. I guess we'll dedicate some
> engineer(s) to doing such research and then make all
> our decisions on the subject.
> 
> On Mon, Dec 8, 2014 at 11:07 AM, Sebastian Kalinowski
>  wrote:
> > 2014-12-04 14:01 GMT+01:00 Igor Kalnitsky :
> >>
> >> Ok, guys,
> >>
> >> It became obvious that most of us either vote for Pecan or abstain from
> >> voting.
> >
> >
> > Yes, and it's been 4 days since last message in this thread and no
> > objections, so it seems
> > that Pecan in now our framework-of-choice for Nailgun and future
> > apps/projects.
> >
> >>
> >>
> >> So I propose to stop fighting this battle (Flask vs Pecan) and start
> >> thinking about moving to Pecan. You know, there are many questions
> >> that need to be discussed (such as 'should we change API version' or
> >> 'should be it done iteratively or as one patchset').
> >
> >
> > IMHO small, iterative changes are rather obvious.
> > For other questions maybe we need (a draft of) a blueprint and a separate
> > mail thread?
> >
> >>
> >>
> >> - Igor
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> -- 
> Best regards,
> Nick Markov
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Query on creating multiple resources

2014-12-08 Thread Sushma Korati
Hello All,


Can we create multiple resources using a single task, like multiple keypairs or 
security-groups or networks etc?


I am trying to extend the existing "create_vm" workflow, such that it accepts a 
list of security groups. In the workflow, before create_vm I am trying to 
create the security group if it does not exist.


Just to test the security group functionality individually I wrote a sample 
workflow:



version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
type: direct
input:
  - name
  - description

tasks:
  create_secgroups:
action: nova.security_groups_create name={$.name} 
description={$.description}


This is a straightforward workflow, but I am unable to figure out how to pass 
multiple security groups to the above workflow.

I tried passing multiple dicts in the context file but it did not work.

--

{
  "name": "secgrp1",
  "description": "using mistral"
},
{
  "name": "secgrp2",
  "description": "using mistral"
}

-

Is there any way to modify this workflow such that it creates more than one 
security group?

Please help.


Regards,

Sushma



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Host health monitoring

2014-12-08 Thread Roman Dobosz
On Wed, 3 Dec 2014 08:44:57 +0100
Roman Dobosz  wrote:

> I've just started to work on the topic of detecting whether a host is alive
> or not: https://blueprints.launchpad.net/nova/+spec/host-health-monitoring
> 
> I'll appreciate any comments :)

I've submitted another blueprint, which is closely bound to the previous one: 
https://blueprints.launchpad.net/nova/+spec/pacemaker-servicegroup-driver

The idea behind these two blueprints is to make Nova aware of host status, not 
only of the services that run on the host. Bringing in Pacemaker as a 
servicegroup driver will provide us with two things: fencing and reliable 
information about host state. We can therefore avoid situations where some 
actions mistake service state for host state.
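
For illustration only -- assuming nova.servicegroup's driver interface, with
the actual Pacemaker calls left as placeholders -- the surface such a driver
has to fill in is small:

from nova.servicegroup.drivers import base

class PacemakerDriver(base.Driver):
    """Sketch of a Pacemaker-backed servicegroup driver (illustrative)."""

    def join(self, member_id, group_id, service=None):
        # Register this service/host with the Pacemaker cluster instead
        # of starting a periodic DB heartbeat.
        raise NotImplementedError()

    def is_up(self, member):
        # Ask Pacemaker for the host's membership/fencing state rather
        # than inferring liveness from a heartbeat timestamp.
        raise NotImplementedError()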

Comments?

-- 
Kind regards
Roman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-12-08 Thread Sandy Walsh

>From: Sandy Walsh [sandy.wa...@rackspace.com] Monday, December 01, 2014 9:29 AM
>
>>From: Duncan Thomas [duncan.tho...@gmail.com]
>>Sent: Sunday, November 30, 2014 5:40 AM
>>To: OpenStack Development Mailing List
>>Subject: Re: [openstack-dev] Where should Schema files live?
>>
>>Duncan Thomas
>>On Nov 27, 2014 10:32 PM, "Sandy Walsh"  wrote:
>>>
>>> We were thinking each service API would expose their schema via a new 
>>> /schema resource (or something). Nova would expose its schema. Glance its 
>>> own. etc. This would also work well for installations still using older 
>>> deployments.
>>This feels like externally exposing info that need not be external (since the 
>>notifications are not external to the deploy) and it sounds like it will 
>>potentially leak fine-grained version and maybe deployment config details 
>>that you don't want to make public - either for commercial reasons or to make 
>>targeted attacks harder
>>
>
>Yep, good point. Makes a good case for standing up our own service or just 
>relying on the tarballs being in a well-known place.

Hmm, I wonder if it makes sense to limit the /schema resource to service 
accounts. Expose it by role.

There's something in the back of my head that doesn't like calling out to the 
public API though. Perhaps unfounded.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][ThirdPartyCI] Need help setting up CI

2014-12-08 Thread Eduard Matei
Resending this to the dev ML as it seems I get a quicker response here :)

I created a job in Jenkins, added as Build Trigger: "Gerrit Event: Patchset
Created", chose as server the configured Gerrit server that was previously
tested, then added the project openstack-dev/sandbox and saved.
I made a change on the dev sandbox repo but couldn't trigger my job.

Any ideas?
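
One quick sanity check before digging into the Jenkins side (the account name
below is a placeholder): confirm the service account can see the event stream
at all, since the Gerrit trigger plugin consumes exactly this stream:

import json
import subprocess

# Listen to Gerrit's event stream over the same SSH transport the
# Gerrit trigger plugin uses; 'my-ci-account' is a placeholder.
proc = subprocess.Popen(
    ['ssh', '-p', '29418', 'my-ci-account@review.openstack.org',
     'gerrit', 'stream-events'],
    stdout=subprocess.PIPE)

for line in proc.stdout:
    event = json.loads(line)
    if event.get('type') == 'patchset-created':
        print(event['change']['project'], event['change']['url'])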

Thanks,
Eduard

On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei <
eduard.ma...@cloudfounders.com> wrote:

> Hello everyone,
>
> Thanks to the latest changes to the service account creation process,
> we're one step closer to setting up our own CI platform for Cinder.
>
> So far we've got:
> - Jenkins master (with Gerrit plugin) and slave (with DevStack and our
> storage solution)
> - Service account configured and tested (can manually connect to
> review.openstack.org and get events and publish comments)
>
> The next step would be to set up a job to do the actual testing; this is
> where we're stuck.
> Can someone please point us to a clear example on how a job should look
> like (preferably for testing Cinder on Kilo)? Most links we've found are
> broken, or tools/scripts are no longer working.
> Also, we cannot change the Jenkins master too much (it's owned by Ops team
> and they need a list of tools/scripts to review before installing/running
> so we're not allowed to experiment).
>
> Thanks,
> Eduard
>
> --
>
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com
>  | eduard.ma...@cloudfounders.com
>
>
>
> *CloudFounders, The Private Cloud Software Company*
>
>
>


-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] V3 API support

2014-12-08 Thread Sergey Nikitin
Thank you guys for the helpful information. Alex, I'll remove the v2 schema
and add v2.1 support.
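
For reference, v2.1 request validation is plain jsonschema applied by
decorators in nova.api.validation; a sketch of what a tag-creation schema
might look like (the field name and length limit are illustrative, not the
final spec):

# Illustrative only: a v2.1-style request schema for creating a tag.
create = {
    'type': 'object',
    'properties': {
        'tag': {
            'type': 'string',
            'maxLength': 60,
        },
    },
    'required': ['tag'],
    'additionalProperties': False,
}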

2014-12-08 8:01 GMT+03:00 Alex Xu :

> I think Chris is on vacation. We moved the V3 API to V2.1. V2.1 has some
> improvements compared to V2. You can find more detail at
> http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/v2-on-v3-api.html
>
> We need to support instance tags for V2.1. And in your patch, we don't need
> a json-schema for V2, just for V2.1.
>
> Thanks
> Alex
>
> 2014-12-04 20:50 GMT+08:00 Sergey Nikitin :
>
>> Hi, Christopher,
>>
>> I am working on an API extension for instance tags (
>> https://review.openstack.org/#/c/128940/). Recently one reviewer asked
>> me to add V3 API support. I talked with Jay Pipes about it and he told me
>> that the V3 API has become obsolete. So I wanted to ask you and our community: "Do
>> we need to support v3 API in future nova patches?"
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party

2014-12-08 Thread trinath.soman...@freescale.com
With Kurt Taylor. +1

Very nice idea to start with.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Kurt Taylor [mailto:kurt.r.tay...@gmail.com]
Sent: Friday, December 05, 2014 8:39 PM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-in...@lists.openstack.org
Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for Additional 
Meeting for third-party

In my opinion, further discussion is needed. The proposal on the table is to 
have 2 weekly meetings, one at the existing time of 1800 UTC on Monday and, also 
in the same week, to have another meeting at 0800 UTC on Tuesday.

Here are some of the problems that I see with this approach:

1. Meeting content: Having 2 meetings per week is more than is needed at this 
stage of the working group. There just isn't enough meeting content to justify 
having two meetings every week.

2. Decisions: Any decision made at one meeting will potentially be undone at 
the next, or at least not fully explained. It will be difficult to keep a 
consistent direction for the overall working group.

3. Meeting chair(s): Currently we do not have a commitment for a long-term 
chair of this new second weekly meeting. I will not be able to attend this new 
meeting at the proposed time.

4. Current meeting time: I am not aware of anyone that likes the current time 
of 1800 UTC on Monday. The current time is the main reason it is hard for EU 
and APAC CI Operators to attend.

My proposal was to have only 1 meeting per week at alternating times, just as 
other work groups have done to solve this problem. (See examples at: 
https://wiki.openstack.org/wiki/Meetings)  I volunteered to chair, then ask 
other CI Operators to chair as the meetings evolved. The meeting times could be 
any time between 1300 and 0300 UTC. That way, one week we are good for US and Europe, 
the next week for APAC.

Kurt Taylor (krtaylor)


On Wed, Dec 3, 2014 at 11:10 PM, trinath.soman...@freescale.com
<trinath.soman...@freescale.com> wrote:
+1.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 
4048

-Original Message-
From: Anita Kuno [mailto:ante...@anteaya.info]
Sent: Thursday, December 04, 2014 3:55 AM
To: 
openstack-in...@lists.openstack.org
Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for Additional 
Meeting for third-party

On 12/03/2014 03:15 AM, Omri Marcovitch wrote:
> Hello Anteaya,
>
> A meeting between 8:00 and 16:00 UTC would be great (Israel).
>
>
> Thanks
> Omri
>
> -Original Message-
> From: Joshua Hesketh 
> [mailto:joshua.hesk...@rackspace.com]
> Sent: Wednesday, December 03, 2014 9:04 AM
> To: He, Yongli; OpenStack Development Mailing List (not for usage
> questions); 
> openstack-in...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for
> Additional Meeting for third-party
>
> Hey,
>
> 0700 -> 1000 UTC would work for me most weeks fwiw.
>
> Cheers,
> Josh
>
> Rackspace Australia
>
> On 12/3/14 11:17 AM, He, Yongli wrote:
>> anteaya,
>>
>> UTC 7:00 to UTC 9:00, or UTC 11:30 to UTC 13:00, is an ideal time for China.
>>
>> If there is no time slot there, just pick any time between UTC
>> 7:00 and UTC 13:00. (UTC 9:00 to UTC 11:30 is spent on the road home and
>> at dinner.)
>>
>> Yongi He
>> -Original Message-
>> From: Anita Kuno [mailto:ante...@anteaya.info]
>> Sent: Tuesday, December 02, 2014 4:07 AM
>> To: openstack Development Mailing List;
>> openstack-in...@lists.openstack.org
>> Subject: [openstack-dev] [third-party]Time for Additional Meeting for
>> third-party
>>
>> One of the actions from the Kilo Third-Party CI summit session was to start 
>> up an additional meeting for CI operators to participate from non-North 
>> American time zones.
>>
>> Please reply to this email with times/days that would work for you. The 
>> current third-party meeting is on Mondays at 1800 UTC, which works well since 
>> Infra meetings are on Tuesdays. If we could find a time that works for 
>> Europe and APAC that is also on Monday that would be ideal.
>>
>> Josh Hesketh has said he will try to be available for these meetings, he is 
>> in Australia.
>>
>> Let's get a sense of what days and timeframes work for those interested and 
>> then we can narrow it down and pick a channel.
>>
>> Thanks everyone,
>> Anita.
>>

Okay first of all thanks to everyone who replied.

Again, to clarify, the purpose of this thread has been to find a suitable 
additional third-party meeting time geared towards folks in EU and APAC. We 
live on a sphere; there is no time that will suit everyone.

It looks like we are converging on 0800 UTC as a time, and I am going to suggest we go with that.

Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?

2014-12-08 Thread Neil Jerram
Hi Wu,

As I've also written in a comment at
https://review.openstack.org/#/c/130732/6, it appears that
VIF_TYPE_VHOSTUSER is already covered by the approved spec at
https://review.openstack.org/#/c/96138/.

Given that, is there any reason to consider adding VIF_TYPE_VHOSTUSER
into the VIF_TYPE_TAP spec as well?

Thanks,
Neil


Wuhongning  writes:

> Hi Neil,
>
> @Neil, could you please also add VIF_TYPE_VHOSTUSER in your spec (as I
> commented on it)? There has been active VHOSTUSER discussion in the Juno
> nova BP, and it has the same usefulness as VIF_TYPE_TAP.
>
> Best Regards
> Wu
> 
> From: Neil Jerram [neil.jer...@metaswitch.com]
> Sent: Saturday, December 06, 2014 10:51 AM
> To: Kevin Benton
> Cc: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][neutron] Boundary between Nova and  
> Neutron involvement in network setup?
>
> Kevin Benton  writes:
>
>> I see the difference now.
>> The main concern I see with the NOOP type is that creating the virtual
>> interface could require different logic for certain hypervisors. In
>> that case Neutron would now have to know things about nova and to me
>> it seems like that's slightly too far in the other direction.
>
> Many thanks, Kevin.  I see this now too, as I've just written more fully
> in my response to Ian.
>
> Based on your and others' insight, I've revised and reuploaded my
> VIF_TYPE_TAP spec, and hope it's a lot clearer now.
>
> Regards,
>  Neil
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Fixing the console.log grows forever bug.

2014-12-08 Thread Daniel P. Berrange
On Mon, Dec 08, 2014 at 01:20:19PM +, Dave Walker wrote:
> On 8 December 2014 at 10:33, Daniel P. Berrange  wrote:
> > On Sat, Dec 06, 2014 at 04:38:52PM +1100, Tony Breeds wrote:
> >> Hi All,
> >> In the most recent team meeting we briefly discussed: [1] where the
> >> console.log grows indefinitely, eventually causing guest stalls.  I 
> >> mentioned
> >> that I was working on a spec to fix this issue.
> >>
> >> My original plan was fairly similar to [2].  In that, we'd switch 
> >> libvirt/qemu to
> >> using a unix domain socket and write a simple helper to read from that 
> >> socket
> >> and write to disk.  That helper would close and reopen the on disk file 
> >> upon
> >> receiving a HUP (so logrotate just works).   Life would be good, and we
> >> could
> >> all move on.
> >>
> >> However I was encouraged to investigate fixing this in qemu, such that qemu
> >> could process the HUP and make life better for all.  This is certainly 
> >> doable
> >> and I'm happy[3] to do this work.  I've floated the idea past qemu-devel 
> >> and
> >> they seem okay with the idea.  My main concern is in lag and supporting
> >> qemu/libvirt that can't handle this option.
> >
> > As mentioned in my reply on qemu-devel, I think the right long term solution
> > for this is to fix it in libvirt. We have a general security goal to remove
> > QEMU's ability to open any files whatsoever, instead having it receive all
> > host resources as pre-opened file descriptors from libvirt. So what we
> > anticipate is a new libvirt daemon for processing logs, virtlogd. Anywhere
> > where QEMU currently gets a file to log to ( devices, and its
> > stdout/stderr), it would instead be given a FD that's connected to virtlogd.
> > virtlogd would simply write the data out to file & would be able to close
> > & re-open files to integrate with logrotate.
> >
> >> For the sake of discussion  I'll lay out my best guess right now on fixing 
> >> this
> >> in qemu.
> >>
> >> qemu 2.2.0 /should/ release this year the ETA is 2014-12-09[4] so the fix 
> >> I'm
> >> proposing would be available in qemu 2.3.0 which I think will be available 
> >> in
> >> June/July 2015.  So we'd be into 'L' development before this fix is 
> >> available
> >> and possibly 'M' before the community distros (Fedora and  Ubuntu)[5] 
> >> include
> >> and almost certainly longer for Enterprise distros.  Along with the qemu
> >> development I expect there to be some libvirt development as well but 
> >> right now
> >> I don't think that's critical to the feature or this discussion.
> >>
> >> So if that timeline is approximately correct:
> >>
> >> - Can we wait this long to fix the bug?  As opposed to having it squashed 
> >> in Kilo.
> >> - What do we do in nova for the next ~12 months while we know there isn't 
> >> a qemu fix for this?
> >> - Then once there is a qemu that fixes the issue, do we just say 'thou 
> >> must use
> >>   qemu 2.3.0' or would nova still need to support old and new qemu's ?
> >
> > FWIW, by comparison libvirt is on a monthly release schedule, so a fix done 
> > in
> > libvirt has potential to be available sooner, though obviously there's 
> > bigger
> > dev work to be done in libvirt for this.
> >
> > Regards,
> > Daniel
> 
> Hey,
> 
> This thread started by suggesting having a scheduled task to read from
> a unix socket.  I don't think this can really be considered an
> acceptable fix, as the guest does indeed lock up when the buffer is
> full.
> 
> Initially, I proposed a quick fix for this back in 2011 which provided
> a config option to enable a kernel level ring buffer via a
> non-mainline module called emlog.  This was not merged for
> understandable reasons.  (pre gerrit) -
> http://bazaar.launchpad.net/~davewalker/nova/832507_with_emlog/revision/1509/nova/virt/libvirt/connection.py
> 
> Later that same year, Robie Basak presented a change which introduced
> similar ring-buffer logic in the nova code itself, making use of
> eventlet. This seems quite a reasonable fix, but there was concern it
> might lock up guests: https://review.openstack.org/#/c/706/
> 
> I think shortly after this, it was pretty widely agreed that fixing
> this in Nova is not the correct layer.  Personally, I struggle to see
> qemu or libvirt as the right layer either.  I can't believe that
> treating a console as a flat log file is the best default behavior.
> 
> I still quite like the emlog approach, as having a ringbuffer device
> type in the kernel provides exactly what we need and is pretty simple
> to implement.
> 
> Does anyone know if this generic ringbuffer kernel support was
> proposed to mainline kernel?

The emlog approach means the data would only ever be stored in RAM on the
host, so in the event of a host reboot/crash you lose all guest logs.
While that might be ok for some people, I think we need to support
persistent storage of the logs on disk for historical / auditing record
purposes.

We don't need kernel support to provide a ring buffer. An ordinary userspace helper can provide one just as easily.
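
To illustrate, a minimal sketch of such a helper -- it drains a console
socket and reopens its output on SIGHUP so logrotate just works; both paths
below are hypothetical:

import signal
import socket

LOG_PATH = '/var/log/libvirt/qemu/instance-0001-console.log'    # hypothetical
SOCK_PATH = '/var/lib/libvirt/qemu/instance-0001-console.sock'  # hypothetical

reopen = [False]

def on_hup(signum, frame):
    # logrotate renames the file and sends HUP; reopen on the next write.
    reopen[0] = True

signal.signal(signal.SIGHUP, on_hup)

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCK_PATH)
log = open(LOG_PATH, 'ab')

while True:
    data = sock.recv(4096)
    if not data:
        break
    if reopen[0]:
        log.close()
        log = open(LOG_PATH, 'ab')
        reopen[0] = False
    log.write(data)
    log.flush()
log.close()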

Re: [openstack-dev] [nova] Fixing the console.log grows forever bug.

2014-12-08 Thread Dave Walker
On 8 December 2014 at 10:33, Daniel P. Berrange  wrote:
> On Sat, Dec 06, 2014 at 04:38:52PM +1100, Tony Breeds wrote:
>> Hi All,
>> In the most recent team meeting we briefly discussed: [1] where the
>> console.log grows indefinitely, eventually causing guest stalls.  I mentioned
>> that I was working on a spec to fix this issue.
>>
>> My original plan was fairly similar to [2].  In that, we'd switch libvirt/qemu 
>> to
>> using a unix domain socket and write a simple helper to read from that socket
>> and write to disk.  That helper would close and reopen the on disk file upon
>> receiving a HUP (so logrotate just works).   Life would be good, and we could
>> all move on.
>>
>> However I was encouraged to investigate fixing this in qemu, such that qemu
>> could process the HUP and make life better for all.  This is certainly doable
>> and I'm happy[3] to do this work.  I've floated the idea past qemu-devel and
>> they seem okay with the idea.  My main concern is in lag and supporting
>> qemu/libvirt that can't handle this option.
>
> As mentioned in my reply on qemu-devel, I think the right long term solution
> for this is to fix it in libvirt. We have a general security goal to remove
> QEMU's ability to open any files whatsoever, instead having it receive all
> host resources as pre-opened file descriptors from libvirt. So what we
> anticipate is a new libvirt daemon for processing logs, virtlogd. Anywhere
> where QEMU currently gets a file to log to ( devices, and its
> stdout/stderr), it would instead be given a FD that's connected to virtlogd.
> virtlogd would simply write the data out to file & would be able to close
> & re-open files to integrate with logrotate.
>
>> For the sake of discussion  I'll lay out my best guess right now on fixing 
>> this
>> in qemu.
>>
>> qemu 2.2.0 /should/ release this year the ETA is 2014-12-09[4] so the fix I'm
>> proposing would be available in qemu 2.3.0 which I think will be available in
>> June/July 2015.  So we'd be into 'L' development before this fix is available
>> and possibly 'M' before the community distros (Fedora and  Ubuntu)[5] include
>> and almost certainly longer for Enterprise distros.  Along with the qemu
>> development I expect there to be some libvirt development as well but right 
>> now
>> I don't think that's critical to the feature or this discussion.
>>
>> So if that timeline is approximately correct:
>>
>> - Can we wait this long to fix the bug?  As opposed to having it squashed in 
>> Kilo.
>> - What do we do in nova for the next ~12 months while we know there isn't a 
>> qemu fix for this?
>> - Then once there is a qemu that fixes the issue, do we just say 'thou must 
>> use
>>   qemu 2.3.0' or would nova still need to support old and new qemu's ?
>
> FWIW, by comparison libvirt is on a monthly release schedule, so a fix done in
> libvirt has potential to be available sooner, though obviously there's bigger
> dev work to be done in libvirt for this.
>
> Regards,
> Daniel

Hey,

This thread started by suggesting having a scheduled task to read from
a unix socket.  I don't think this can really be considered an
acceptable fix, as the guest does indeed lock up when the buffer is
full.

Initially, I proposed a quick fix for this back in 2011 which provided
a config option to enable a kernel level ring buffer via a
non-mainline module called emlog.  This was not merged for
understandable reasons.  (pre gerrit) -
http://bazaar.launchpad.net/~davewalker/nova/832507_with_emlog/revision/1509/nova/virt/libvirt/connection.py

Later that same year, Robie Basak presented a change which introduced
similar ring-buffer logic in the nova code itself, making use of
eventlet. This seems quite a reasonable fix, but there was concern it
might lock up guests: https://review.openstack.org/#/c/706/

I think shortly after this, it was pretty widely agreed that Nova is
not the correct layer in which to fix this.  Personally, I struggle
to see qemu or libvirt as the right layer either.  I don't think
treating a console as a flat log file is the best default behavior.

I still quite like the emlog approach, as having a ringbuffer device
type in the kernel provides exactly what we need and is pretty simple
to implement.
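
To make the trade-off concrete, here is a minimal user-space sketch of the
ring-buffer idea (emlog provides equivalent semantics at the kernel level;
the class name and size cap below are invented for illustration):

    from collections import deque

    class ConsoleRingBuffer(object):
        """Keep only the most recent console output, bounding growth."""

        def __init__(self, max_bytes=1024 * 1024):
            self._chunks = deque()
            self._size = 0
            self._max = max_bytes

        def write(self, data):
            self._chunks.append(data)
            self._size += len(data)
            # Drop the oldest chunks once we exceed the cap.
            while self._size > self._max and len(self._chunks) > 1:
                self._size -= len(self._chunks.popleft())

        def dump(self):
            return b''.join(self._chunks)

Old output is silently discarded, which is exactly the trade-off being
debated here: bounded disk usage versus a complete console history.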

Does anyone know if this generic ringbuffer kernel support was
proposed to mainline kernel?

--
Kind Regards,
Dave Walker

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-08 Thread Sean Dague
On 12/07/2014 12:02 PM, Jay Pipes wrote:
> On 12/07/2014 04:19 AM, Michael Still wrote:
>> On Sun, Dec 7, 2014 at 7:03 PM, Gary Kotton  wrote:
>>> On 12/6/14, 7:42 PM, "Jay Pipes"  wrote:
>>
>> [snip]
>>
 -1 on pixelbeat, since he's been active in reviews on
 various things AFAICT in the last 60-90 days and seems to be still a
 considerate reviewer in various areas.
>>>
>>> I agree -1 for Padraig
>>
>> I'm going to be honest and say I'm confused here.
>>
>> We've always said we expect cores to maintain an average of two
>> reviews per day. That's not new, nor a rule created by me. Padraig is
>> a great guy, but has been working on other things -- he's done 60
>> reviews in the last 60 days -- which is about half of what we expect
>> from a core.
>>
>> Are we talking about removing the two reviews a day requirement? If
>> so, how do we balance that with the widespread complaints that core
>> isn't keeping up with its workload? We could add more people to core,
>> but there is also a maximum practical size to the group if we're going
>> to keep everyone on the same page, especially when the less active
>> cores don't generally turn up to our IRC meetings and are therefore
>> more "expensive" to keep up to date.
>>
>> How can we say we are doing our best to keep up with the incoming
>> review workload if all reviewers aren't doing at least the minimum
>> level of reviews?
> 
> Personally, I care more about the quality of reviews than the quantity.
> That said, I understand that we have a small number of core reviewers
> relative to the number of open reviews in Nova (~650-700 open reviews
> most days) and agree with Dan Smith that 2 reviews per day doesn't sound
> like too much of a hurdle for core reviewers.
> 
> The reason I think it's important to keep Padraig as a core is that he
> has done considerate, thoughtful code reviews, albeit in a smaller
> quantity. By saying we only look at the number of reviews in our
> estimation of keeping contributors on the core team, we are
> incentivizing the wrong behaviour, IMO. We should be pushing that the
> thought that goes into reviews is more important than the sheer number
> of reviews.
> 
> Is it critical that we get more eyeballs reviewing code? Yes, absolutely
> it is. Is it critical that we get more reviews from core reviewers as
> well as non-core reviewers. Yes, absolutely.
> 
> Bottom line, we need to balance between quality and quantity, and
> kicking out a core reviewer who has quality code reviews because they
> don't have that many of them sends the wrong message, IMO.

Maybe. I'm kind of torn on it.

I think we need to separate "providing insightful reviews" with
"actively engaged in Nova". I feel like there are tons of community
members that provide insightful reviews that we hold a patch until we've
seen their relevant +1 in an area of their expertise. If our concern is
missing expertise, then I don't think this changes things.

I could go either way on this one in particular. But I'm also happy to
drop and move forward. Padraig's commit history in OpenStack atm shows
that his focus right now isn't upstream. He's not currently very active
in IRC regularly, on the ML, triaging bugs, or fixing bugs, which are
all ways we know folks are engaged enough to have a feel where the norms
of Nova have evolved. Which is cool, folks change focus.

I think in the past we've erred very heavily on making it tough to let
people into the core reviewer team because it's so hard to remove
people. Which doesn't help us grow, we stagnate. I think the fear of a
fight on removal of core reviewers every time makes people even more
cautious in supporting adds.

Maybe this is erring in the other direction, but I'm happy to take
Michael's judgement call on that that it isn't. If Padraig gets more
engaged, I'd be happy adding him back in.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party

2014-12-08 Thread Erlon Cruz
I agree that 2 meetings per week will mess things up. +1 for alternating
meetings.

On Fri, Dec 5, 2014 at 1:08 PM, Kurt Taylor  wrote:

> In my opinion, further discussion is needed. The proposal on the table is
> to have 2 weekly meetings, one at the existing time of 1800UTC on Monday
> and, also in the same week, to have another meeting at 0800 UTC on Tuesday.
>
> Here are some of the problems that I see with this approach:
>
> 1. Meeting content: Having 2 meetings per week is more than is needed at
> this stage of the working group. There just isn't enough meeting content to
> justify having two meetings every week.
>
> 2. Decisions: Any decision made at one meeting will potentially be undone
> at the next, or at least not fully explained. It will be difficult to keep
> consistent direction with the overall work group.
>
> 3. Meeting chair(s): Currently we do not have a commitment for a long-term
> chair of this new second weekly meeting. I will not be able to attend this
> new meeting at the proposed time.
>
> 4. Current meeting time: I am not aware of anyone that likes the current
> time of 1800 UTC on Monday. The current time is the main reason it is hard
> for EU and APAC CI Operators to attend.
>
> My proposal was to have only 1 meeting per week at alternating times, just
> as other work groups have done to solve this problem. (See examples at:
> https://wiki.openstack.org/wiki/Meetings)  I volunteered to chair, then
> ask other CI Operators to chair as the meetings evolved. The meeting times
> could be any between 1300-0300 UTC. That way, one week we are good for US
> and Europe, the next week for APAC.
>
> Kurt Taylor (krtaylor)
>
>
> On Wed, Dec 3, 2014 at 11:10 PM, trinath.soman...@freescale.com <
> trinath.soman...@freescale.com> wrote:
>
>> +1.
>>
>> --
>> Trinath Somanchi - B39208
>> trinath.soman...@freescale.com | extn: 4048
>>
>> -Original Message-
>> From: Anita Kuno [mailto:ante...@anteaya.info]
>> Sent: Thursday, December 04, 2014 3:55 AM
>> To: openstack-in...@lists.openstack.org
>> Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for
>> Additional Meeting for third-party
>>
>> On 12/03/2014 03:15 AM, Omri Marcovitch wrote:
>> > Hello Anteaya,
>> >
>> > A meeting between 8:00 - 16:00 UTC time will be great (Israel).
>> >
>> >
>> > Thanks
>> > Omri
>> >
>> > -Original Message-
>> > From: Joshua Hesketh [mailto:joshua.hesk...@rackspace.com]
>> > Sent: Wednesday, December 03, 2014 9:04 AM
>> > To: He, Yongli; OpenStack Development Mailing List (not for usage
>> > questions); openstack-in...@lists.openstack.org
>> > Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for
>> > Additional Meeting for third-party
>> >
>> > Hey,
>> >
>> > 0700 -> 1000 UTC would work for me most weeks fwiw.
>> >
>> > Cheers,
>> > Josh
>> >
>> > Rackspace Australia
>> >
>> > On 12/3/14 11:17 AM, He, Yongli wrote:
>> >> anteaya,
>> >>
>> >> UTC 7:00 to UTC 9:00, or UTC 11:30 to UTC 13:00, is an ideal time for
>> China.
>> >>
>> >> If there is no time slot there, just pick any time between UTC
>> >> 7:00 and UTC 13:00. (UTC 9:00 to UTC 11:30 is the trip home and
>> >> dinner.)
>> >>
>> >> Yongi He
>> >> -Original Message-
>> >> From: Anita Kuno [mailto:ante...@anteaya.info]
>> >> Sent: Tuesday, December 02, 2014 4:07 AM
>> >> To: openstack Development Mailing List;
>> >> openstack-in...@lists.openstack.org
>> >> Subject: [openstack-dev] [third-party]Time for Additional Meeting for
>> >> third-party
>> >>
>> >> One of the actions from the Kilo Third-Party CI summit session was to
>> start up an additional meeting for CI operators to participate from
>> non-North American time zones.
>> >>
>> >> Please reply to this email with times/days that would work for you.
>> The current third party meeting is on Mondays at 1800 utc which works well
>> since Infra meetings are on Tuesdays. If we could find a time that works
>> for Europe and APAC that is also on Monday that would be ideal.
>> >>
>> >> Josh Hesketh has said he will try to be available for these meetings,
>> he is in Australia.
>> >>
>> >> Let's get a sense of what days and timeframes work for those
>> interested and then we can narrow it down and pick a channel.
>> >>
>> >> Thanks everyone,
>> >> Anita.
>> >>
>>
>> Okay first of all thanks to everyone who replied.
>>
>> Again, to clarify, the purpose of this thread has been to find a suitable
>> additional third-party meeting time geared towards folks in EU and APAC. We
>> live on a sphere, there is no time that will suit everyone.
>>
>> It looks like we are converging on 0800 UTC as a time and I am going to
>> suggest Tuesdays. We have very little competition for space at that date
>> + time combination so we can use #openstack-meeting (I have already
>> booked the space on the wikipage).
>>
>> So barring further discussion, see you then!
>>
>> Thanks everyone,
>> Anita.
>>
>> ___
>> OpenStack-Infra mailing list

Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-08 Thread Murugan, Visnusaran

Hi Zane & Michael,

Please have a look @ 
https://etherpad.openstack.org/p/execution-stream-and-aggregator-based-convergence

Updated with a combined approach which does not require persisting the graph or
removing the backup stack. This approach reduces DB queries by waiting for a
completion notification on a topic. The drawback I see is that the delete-stack
stream will be huge, as it will contain the entire graph. We can always dump such
data in ResourceLock.data as JSON and pass a simple flag "load_stream_from_db" to
the converge RPC call as a workaround for the delete operation.

To stop a current stack operation, we will use your traversal_id based approach.
If you feel the aggregator model creates too many queues, then we might have to
poll the DB to get resource status. (Which will impact performance adversely :) )


Lock table: name(Unique - Resource_id), stack_id, engine_id, data (Json to 
store stream dict)
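
For concreteness, a rough SQLAlchemy sketch of that lock table (purely
illustrative; the model name, types and column sizes are assumptions):

    from sqlalchemy import Column, String, Text
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class ResourceLock(Base):
        """Sketch of the proposed lock table."""
        __tablename__ = 'resource_lock'

        name = Column(String(36), primary_key=True)    # resource_id, unique
        stack_id = Column(String(36), nullable=False)
        engine_id = Column(String(36), nullable=True)  # holder, if locked
        data = Column(Text)                            # JSON stream dict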
   
Your thoughts.
Vishnu (irc: ckmvishnu)
Unmesh (irc: unmeshg)


-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com] 
Sent: Thursday, December 4, 2014 10:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

On 01/12/14 02:02, Anant Patil wrote:
> On GitHub:https://github.com/anantpatil/heat-convergence-poc

I'm trying to review this code at the moment, and finding some stuff I don't 
understand:

https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916

This appears to loop through all of the resources *prior* to kicking off any 
actual updates to check if the resource will change. This is impossible to do 
in general, since a resource may obtain a property value from an attribute of 
another resource and there is no way to know whether an update to said other 
resource would cause a change in the attribute value.

In addition, no attempt to catch UpdateReplace is made. Although that looks 
like a simple fix, I'm now worried about the level to which this code has been 
tested.


I'm also trying to wrap my head around how resources are cleaned up in 
dependency order. If I understand correctly, you store in the ResourceGraph 
table the dependencies between various resource names in the current template 
(presumably there could also be some left around from previous templates too?). 
For each resource name there may be a number of rows in the Resource table, 
each with an incrementing version. 
As far as I can tell though, there's nowhere that the dependency graph for 
_previous_ templates is persisted? So if the dependency order changes in the 
template we have no way of knowing the correct order to clean up in any more? 
(There's not even a mechanism to associate a resource version with a particular 
template, which might be one avenue by which to recover the dependencies.)
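
To make the failure mode concrete: cleanup order is just the reverse
topological order of the graph the resources were *created* from, so losing
the old graph loses the order. A toy sketch (Python 3.9+ graphlib; the
resource names are invented):

    from graphlib import TopologicalSorter

    # resource -> set of resources it depends on (hypothetical data)
    deps = {'server': {'port'}, 'port': {'network'}, 'network': set()}

    create_order = list(TopologicalSorter(deps).static_order())
    # ['network', 'port', 'server']
    cleanup_order = list(reversed(create_order))
    # ['server', 'port', 'network'] -- delete dependents first

    # If only the *new* template's graph is persisted, and it no longer
    # contains 'port', nothing tells us 'server' must be deleted first.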

I think this is an important case we need to be able to handle, so I added a 
scenario to my test framework to exercise it and discovered that my 
implementation was also buggy. Here's the fix: 
https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40


> It was difficult, for me personally, to completely understand Zane's 
> PoC and how it would lay the foundation for aforementioned design 
> goals. It would be very helpful to have Zane's understanding here. I 
> could understand that there are ideas like async message passing and 
> notifying the parent which we also subscribe to.

So I guess the thing to note is that there are essentially two parts to my PoC:
1) A simulation framework that takes what will be, in the final implementation,
multiple tasks running in parallel in separate processes and talking to a
database, and replaces it with an event loop that runs the tasks sequentially
in a single process with an in-memory data store.
I could have built a more realistic simulator using Celery or something, but I
preferred this way as it offers deterministic tests.
2) A toy implementation of Heat on top of this framework.

The files map roughly to Heat something like this:

converge.engine   -> heat.engine.service
converge.stack-> heat.engine.stack
converge.resource -> heat.engine.resource
converge.template -> heat.engine.template
converge.dependencies -> actually is heat.engine.dependencies
converge.sync_point   -> no equivalent
converge.converger-> no equivalent (this is convergence "worker")
converge.reality  -> represents the actual OpenStack services

For convenience, I just use the @asynchronous decorator to turn an ordinary 
method call into a simulated message.
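
For readers following along, a minimal sketch of how such a decorator can
work -- this is an illustration, not the actual PoC code:

    from collections import deque

    _queue = deque()  # in-memory stand-in for the message bus

    def asynchronous(method):
        """Queue a method call instead of executing it inline."""
        def wrapper(self, *args, **kwargs):
            _queue.append((method, self, args, kwargs))
        return wrapper

    def run_event_loop():
        """Deliver queued 'messages' one at a time, deterministically."""
        while _queue:
            method, obj, args, kwargs = _queue.popleft()
            method(obj, *args, **kwargs)

Because delivery is sequential and in-process, every test run is repeatable.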

The concept is essentially as follows:
At the start of a stack update (creates and deletes are also just
updates) we create any new resources in the DB and calculate the dependency graph
for the update from the data in the DB and template. This graph is the same one
used by updates in Heat currently, so it contains both the forward and reverse
(cleanup) dependencies.

Re: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support

2014-12-08 Thread Kashyap Chamarthy
On Fri, Dec 05, 2014 at 02:12:44PM -0600, Matt Riedemann wrote:
> 
> 
> On 12/5/2014 1:32 PM, Sean Dague wrote:
> >On 12/05/2014 01:50 PM, Matt Riedemann wrote:
> >>In Juno we effectively disabled live snapshots with libvirt due to bug
> >>1334398 [1] failing the gate about 25% of the time.
> >>
> >>I was going through the Juno release notes today and saw this as a known
> >>issue, which reminded me of it and was wondering if there is anything
> >>being done about it?

As Dan Berrangé noted, it's nearly impossible to reproduce this issue
independently outside of OpenStack Gating environment. I brought this up
at the recently concluded KVM Forum earlier this October. To debug this
any further, one of the QEMU block layer developers asked if we can get
QEMU instance running on Gate run under `gdb` (IIRC, danpb suggested
this too, previously) to get further tracing details.

> >>As I recall, it *works* but it wasn't working under the stress our
> >>check/gate system puts on that code path.

FWIW, I myself couldn't reproduce it independently via libvirt alone or
via QMP (QEMU Machine Protocol) commands.

Dan's workaround ("enable it permanently, except for under the gate")
sounds sensible to me.

> >>One thing I'm thinking is, couldn't we make this an experimental config
> >>option and by default it's disabled but we could run it in the
> >>experimental queue, or people could use it without having to patch the
> >>code to remove the artificial minimum version constraint put in the code.
> >>
> >>Something like:
> >>
> >>if CONF.libvirt.live_snapshot_supported:
> >># do your thing
> >>
> >>[1] https://bugs.launchpad.net/nova/+bug/1334398
> >
> >So, it works. If you aren't booting / shutting down guests at exactly
> >the same time as snapshotting. 

Tried this exact case independently, and cannot reproduce, as stated by
Dan (and others on the bug) in this thread.

> >I believe cburgess said in IRC yesterday
> >he was going to take another look at it next week.
> >
> >I'm happy to put this into dansmith's pattented [workarounds] config
> >group (coming soon to fix the qemu-convert bug). But I don't think this
> >should be a normal libvirt option.
> >
> > -Sean
> >
> 
> Yeah the [workarounds] group

Is there any URL where I can read more about this?

> is what got me thinking about it too as a config option, otherwise I
> think the idea of an [experimental] config group has come up before as
> a place to put 'not tested, here be dragons' type stuff.
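
As an illustration, such an option might look roughly like this with
oslo.config; the group, option name and default below are hypothetical,
not an agreed design:

    from oslo_config import cfg

    CONF = cfg.CONF

    workarounds_opts = [
        cfg.BoolOpt('disable_libvirt_livesnapshot',
                    default=True,
                    help='Disable live snapshots in the libvirt driver'),
    ]
    CONF.register_opts(workarounds_opts, group='workarounds')

    # ...later, in the driver:
    if not CONF.workarounds.disable_libvirt_livesnapshot:
        pass  # take the live snapshot path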

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Fixing the console.log grows forever bug.

2014-12-08 Thread Daniel P. Berrange
On Sat, Dec 06, 2014 at 04:38:52PM +1100, Tony Breeds wrote:
> Hi All,
> In the most recent team meeting we briefly discussed: [1] where the
> console.log grows indefinitely, eventually causing guest stalls.  I mentioned
> that I was working on a spec to fix this issue.
> 
> My original plan was fairly similar to [2], in that we'd switch libvirt/qemu
> to
> using a unix domain socket and write a simple helper to read from that socket
> and write to disk.  That helper would close and reopen the on-disk file upon
> receiving a HUP (so logrotate just works).  Life would be good, and we could
> all move on.
> 
> However I was encouraged to investigate fixing this in qemu, such that qemu
> could process the HUP and make life better for all.  This is certainly doable
> and I'm happy[3] to do this work.  I've floated the idea past qemu-devel and
> they seem okay with the idea.  My main concern is in lag and supporting
> qemu/libvirt that can't handle this option.

As mentioned in my reply on qemu-devel, I think the right long term solution
for this is to fix it in libvirt. We have a general security goal to remove
QEMU's ability to open any files whatsoever, instead having it receive all
host resources as pre-opened file descriptors from libvirt. So what we
anticipate is a new libvirt daemon for processing logs, virtlogd. Anywhere
where QEMU currently gets a file to log to (char devices, and its
stdout/stderr), it would instead be given a FD that's connected to virtlogd.
virtlogd would simply write the data out to file & would be able to close
& re-open files to integrate with logrotate.
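
The helper behaviour described above (close and re-open on HUP so logrotate
just works) is simple enough to sketch; assuming the console data arrives on
a unix domain socket, something like:

    import signal

    class ConsoleLogWriter(object):
        """Drain a guest console socket to disk; reopen the file on SIGHUP."""

        def __init__(self, path):
            self.path = path
            self.fh = open(path, 'ab')
            signal.signal(signal.SIGHUP, self._reopen)

        def _reopen(self, signum, frame):
            # logrotate has renamed the old file; reopening creates a
            # fresh one at the original path.
            self.fh.close()
            self.fh = open(self.path, 'ab')

        def drain(self, sock):
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                self.fh.write(data)
                self.fh.flush()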

> For the sake of discussion  I'll lay out my best guess right now on fixing 
> this
> in qemu.
> 
> qemu 2.2.0 /should/ release this year; the ETA is 2014-12-09 [4], so the fix I'm
> proposing would be available in qemu 2.3.0 which I think will be available in
> June/July 2015.  So we'd be into 'L' development before this fix is available
> and possibly 'M' before the community distros (Fedora and  Ubuntu)[5] include
> and almost certainly longer for Enterprise distros.  Along with the qemu
> development I expect there to be some libvirt development as well but right 
> now
> I don't think that's critical to the feature or this discussion.
> 
> So if that timeline is approximately correct:
> 
> - Can we wait this long to fix the bug?  As opposed to having it squashed in 
> Kilo.
> - What do we do in nova for the next ~12 months while we know there isn't a qemu
> to fix this?
> - Then once there is a qemu that fixes the issue, do we just say 'thou must 
> use
>   qemu 2.3.0' or would nova still need to support old and new qemu's ?

FWIW, by comparison libvirt is on a monthly release schedule, so a fix done in
libvirt has potential to be available sooner, though obviously there's bigger
dev work to be done in libvirt for this.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support

2014-12-08 Thread Daniel P. Berrange
On Fri, Dec 05, 2014 at 12:50:37PM -0600, Matt Riedemann wrote:
> In Juno we effectively disabled live snapshots with libvirt due to bug
> 1334398 [1] failing the gate about 25% of the time.
> 
> I was going through the Juno release notes today and saw this as a known
> issue, which reminded me of it and was wondering if there is anything being
> done about it?
> 
> As I recall, it *works* but it wasn't working under the stress our
> check/gate system puts on that code path.

Yep, I've tried to reproduce the problem in countless different ways and
never succeeded, even when replicating the gate test VM config & setup
exactly. IOW it is a highly load-dependent edge case.

IMHO we did a disservice to users by disabling this. Based on my experience
trying to reproduce it, it is something that would work fine for end users
the overwhelming majority of the time. I think we should just put a temporary
hack into Nova that only disables the code when running under the gate
systems, leaving it enabled for users.

> One thing I'm thinking is, couldn't we make this an experimental config
> option and by default it's disabled but we could run it in the experimental
> queue, or people could use it without having to patch the code to remove the
> artificial minimum version constraint put in the code.
> 
> Something like:
> 
> if CONF.libvirt.live_snapshot_supported:
># do your thing

I don't really think we need that. Just enable it permanently, except for
under the gate.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-08 Thread Daniel P. Berrange
On Sun, Dec 07, 2014 at 08:19:54PM +1100, Michael Still wrote:
> On Sun, Dec 7, 2014 at 7:03 PM, Gary Kotton  wrote:
> > On 12/6/14, 7:42 PM, "Jay Pipes"  wrote:
> 
> [snip]
> 
> >>-1 on pixelbeat, since he's been active in reviews on
> >>various things AFAICT in the last 60-90 days and seems to be still a
> >>considerate reviewer in various areas.
> >
> > I agree -1 for Padraig
> 
> I'm going to be honest and say I'm confused here.
> 
> We've always said we expect cores to maintain an average of two
> reviews per day. That's not new, nor a rule created by me. Padraig is
> a great guy, but has been working on other things -- he's done 60
> reviews in the last 60 days -- which is about half of what we expect
> from a core.

Even that limited 60 reviews still has a notable positive
impact on the ability of Nova core to get things done.

> Are we talking about removing the two reviews a day requirement? If
> so, how do we balance that with the widespread complaints that core
> isn't keeping up with its workload? We could add more people to core,
> but there is also a maximum practical size to the group if we're going
> to keep everyone on the same page, especially when the less active
> cores don't generally turn up to our IRC meetings and are therefore
> more "expensive" to keep up to date.
> 
> How can we say we are doing our best to keep up with the incoming
> review workload if all reviewers aren't doing at least the minimum
> level of reviews?

How exactly is cutting more people from core helping us to keep up
with the incoming review workload? It just makes it worse.

The only way to majorly help with that is to either get about 10-20
more people onto core, which is unlikely, or to majorly split up the
project as I've suggested in the past, or something in between, e.g.
give the top 40 people in the review count list the ability to +2
things, leaving Nova core to just toggle the +A bit.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Intercycle release package versioning

2014-12-08 Thread isviridov

Hello Aleksei,

Thanks for raising it.

I'm OK with it, and let me clarify the source code repo state for each of the
releases.



1:2014.2-0ubuntu1

tag: 2014.2
fixes in stable/juno


1:2014.2~rc2-0ubuntu1

tag: 2014.2.rc2
fixes in master


1:2014.2~b2-0ubuntu1

tag: 2014.2.b2
fixes in master


1:2014.2~b2.dev{MMDD}_{GIT_SHA1}-0ubuntu1

no tag
fixes in master
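
The scheme works because dpkg sorts '~' before anything else, including the
end of the string, so pre-release versions upgrade cleanly to the final
release. A quick sanity check (assumes a machine with dpkg installed):

    import subprocess

    def deb_lt(a, b):
        """True if Debian version string a sorts before b."""
        return subprocess.call(
            ['dpkg', '--compare-versions', a, 'lt', b]) == 0

    assert deb_lt('1:2014.2~b2-0ubuntu1', '1:2014.2~rc2-0ubuntu1')
    assert deb_lt('1:2014.2~rc2-0ubuntu1', '1:2014.2-0ubuntu1')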

Any thoughts?

Thanks,
Ilya


05.12.2014 15:39, Aleksei Chuprin (CS):

Hello everyone,

Because the MagnetoDB project releases more frequently than other OpenStack
projects, I propose using the following versioning strategy for MagnetoDB packages:

1:2014.2-0ubuntu1
1:2014.2~rc2-0ubuntu1
1:2014.2~rc1-0ubuntu1
1:2014.2~b2-0ubuntu1
1:2014.2~b2.dev{MMDD}_{GIT_SHA1}-0ubuntu1
1:2014.2~b2.dev{MMDD}_{GIT_SHA1}-0ubuntu1
1:2014.2~b1-0ubuntu1

What do you think about this?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-08 Thread Daniel P. Berrange
On Sat, Dec 06, 2014 at 07:56:21AM +1100, Michael Still wrote:
> I used Russell's 60 day stats in making this decision. I can't find a
> documented historical precedent on what period the stats should be
> generated over, however 60 days seems entirely reasonable to me.
> 
> 2014-12-05 15:41:11.212927
> 
> Reviews for the last 60 days in nova
> ** -- nova-core team member
> +-------------------+------------------------------------------+----------------+
> |     Reviewer      | Reviews   -2   -1   +1   +2   +A    +/- % | Disagreements* |
> +-------------------+------------------------------------------+----------------+
> |    berrange **    |   669    13  134    1  521  194    78.0% |   47 (  7.0%)  |
> |      jogo **      |   431    38  161    2  230  117    53.8% |   19 (  4.4%)  |
> |    oomichi **     |   309     1  106    4  198   58    65.4% |    3 (  1.0%)  |
> |     danms **      |   293    34  133   15  111   43    43.0% |   12 (  4.1%)  |
> |    jaypipes **    |   290    10  108   14  158   42    59.3% |   15 (  5.2%)  |
> |    ndipanov **    |   192    10   78    6   98   24    54.2% |   24 ( 12.5%)  |
> |    klmitch **     |   190     1   22    0  167   12    87.9% |   21 ( 11.1%)  |
> |    cyeoh-0 **     |   184     0   70   10  104   41    62.0% |    9 (  4.9%)  |
> |    mriedem **     |   173     3   86    8   76   31    48.6% |    8 (  4.6%)  |
> |  johngarbutt **   |   164    19   79    6   60   24    40.2% |    7 (  4.3%)  |
> |    cerberus **    |   151     0    9   40  102   38    94.0% |    7 (  4.6%)  |
> |   mikalstill **   |   145     2    8    1  134   48    93.1% |    3 (  2.1%)  |
> |     alaski **     |   104     0    7    6   91   54    93.3% |    5 (  4.8%)  |
> |     sdague **     |    98     6   21    2   69   40    72.4% |    4 (  4.1%)  |
> |    russellb **    |    86     1   10    0   75   29    87.2% |    5 (  5.8%)  |
> |  p-draigbrady **  |    60     0   12    1   47   10    80.0% |    4 (  6.7%)  |
> |    belliott **    |    32     0    8    1   23   15    75.0% |    4 ( 12.5%)  |
> |  vishvananda **   |     8     0    2    0    6    1    75.0% |    2 ( 25.0%)  |
> |   dan-prince **   |     7     0    0    0    7    3   100.0% |    4 ( 57.1%)  |
> |    cbehrens **    |     4     0    2    0    2    0    50.0% |    1 ( 25.0%)  |
> +-------------------+------------------------------------------+----------------+
> 
> The previously held standard for core reviewer activity has been an
> _average_ of two reviews per day, which is why I used the 60 days
> stats (to eliminate vacations and so forth). It should be noted that
> the top ten or so reviewers are doing at lot more than that.
> 
> All of the reviewers I dropped are valued members of the team, and I
> am sad to see all of them go. However, it is important that reviewers
> remain active.

Given that the Nova core is horrifically overworked & understaffed,
I really think it is counterproductive for the project
to do this. It is just making the bad situation we're in
even worse :-(

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Nailgun] Web framework

2014-12-08 Thread Nikolay Markov
> Yes, and it's been 4 days since the last message in this thread and no
> objections, so it seems
> that Pecan is now our framework-of-choice for Nailgun and future
> apps/projects.

We still need to do some research on the technical issues and how easily
we can move to Pecan. Thanks to Ryan, we now have multiple links to
solutions and docs on the discussed issues. I guess we'll dedicate some
engineer(s) to doing such research and then make all
our decisions on the subject.

On Mon, Dec 8, 2014 at 11:07 AM, Sebastian Kalinowski
 wrote:
> 2014-12-04 14:01 GMT+01:00 Igor Kalnitsky :
>>
>> Ok, guys,
>>
>> It became obvious that most of us either vote for Pecan or abstain from
>> voting.
>
>
> Yes, and it's been 4 days since the last message in this thread and no
> objections, so it seems
> that Pecan is now our framework-of-choice for Nailgun and future
> apps/projects.
>
>>
>>
>> So I propose to stop fighting this battle (Flask vs Pecan) and start
>> thinking about moving to Pecan. You know, there are many questions
>> that need to be discussed (such as 'should we change API version' or
>> 'should it be done iteratively or as one patchset').
>
>
> IMHO small, iterative changes are rather obvious.
> For other questions maybe we need (a draft of) a blueprint and a separate
> mail thread?
>
>>
>>
>> - Igor
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Nick Markov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-08 Thread Nikola Đipanov
On 12/07/2014 08:52 PM, Michael Still wrote:
> 
> You know what makes me really sad? No one has suggested that perhaps
> Padraig could just pick up his review rate a little. I've repeatedly
> said we can re-add reviewers if that happens.
> 

This is of course not true - everybody *but* the people on this thread
agrees with it (otherwise they would have responded), since re-adding
cores is a well-known process, so him picking it up and getting re-added
is not what is being discussed here.

What we (or at least I and afaict Jay) are saying is - even though his
numbers are low - we still think he should be core because the
thoughtfulness of his reviews matters more to us than the fact that he
is otherwise engaged besides Nova.

N.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git] Rebase failed

2014-12-08 Thread Eduard Matei
Indeed, it started again the review process.
Thanks for your input.

Eduard

On Mon, Dec 8, 2014 at 10:56 AM, Gary Kotton  wrote:

>  Hi,
> The whole review process starts again from scratch :). You can feel free
> to reach out the guys who originally reviewed and then go from there. Good
> luck!
> Thanks
> Gary
>
>   From: Eduard Matei 
> Reply-To: OpenStack List 
> Date: Monday, December 8, 2014 at 10:25 AM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [git] Rebase failed
>
>   Thanks guys,
> It seems one of the files I was trying to merge was actually removed by
> remote (?).
> So I removed it locally (git rm ... ) and now the rebase worked, but now I had
> an extra patchset.
> Will this be merged automatically or does it need reviewers again?
>
>  Thanks,
> Eduard
>
> On Mon, Dec 8, 2014 at 10:18 AM, Sylvain Bauza  wrote:
>
>>
>> Le 08/12/2014 09:13, Eli Qiao a écrit :
>>
>>
>> On 2014-12-08 16:05, Eduard Matei wrote:
>>
>> Hi,
>> My review got approved and it was ready to merge but automatic merge
>> failed.
>> I tried to rebase manually but it still fails.
>>
>>  I'm not very familiar with git, can someone give a hand?
>>
>>   git rebase -i master
>> error: could not apply db4a3bb... New Cinder volume driver for
>> openvstorage.
>>
>>   hi Eduard
>> yeah, you need to manually rebase the changes.
>> git status shows which files need changing (file names in red); find
>> the <<< markers in
>> the file and modify it. After doing that, git add <file>; all
>> conflicted files need to
>> be done, then git rebase --continue.
>>
>>
>>
>> Or you can just wave the magic wand and invoke "git mergetool" with
>> your favourite editor (vimdiff, kdiff3 or whatever else) and let this
>> tool show you the difference between the base branch (when the code
>> branched), the local branch (the master branch) and the remote branch (your
>> changes)
>>
>> My $0.02
>>
>> -Sylvain
>>
>>
>>   When you have resolved this problem, run "git rebase --continue".
>> If you prefer to skip this patch, run "git rebase --skip" instead.
>> To check out the original branch and stop rebasing, run "git rebase
>> --abort".
>> Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New Cinder
>> volume driver for openvstorage.
>>  git rebase --continue
>> etc/cinder/cinder.conf.sample: needs merge
>> You must edit all merge conflicts and then
>> mark them as resolved using git add
>>
>>  Review is: https://review.openstack.org/#/c/130733/
>>
>>  Thanks,
>> Eduard
>>
>>  --
>>
>> *Eduard Biceri Matei, Senior Software Developer*
>> www.cloudfounders.com | eduard.ma...@cloudfounders.com
>>
>> *CloudFounders, The Private Cloud Software Company*
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> --
>> Thanks,
>> Eli (Li Yong) Qiao
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>  --
>
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com 
> 

Re: [openstack-dev] [git] Rebase failed

2014-12-08 Thread Jan Kundrát

On Monday, 8 December 2014 09:57:15 CEST, Jan Kundrát wrote:

On Monday, 8 December 2014 09:25:43 CEST, Eduard Matei wrote:

So I removed it locally (git rm ... ) and now the rebase worked, but now I had
an extra patchset.
Will this be merged automatically or does it need reviewers again?


Make sure you have a single commit which includes both the `git 
rm` and your original changes, and that this commit still has 
the same Change-Id line as the one you uploaded originally.


And on a re-read, it seems that you already know that very well, so in that
case, sorry for the noise.


/me grabs that coffee again.

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git] Rebase failed

2014-12-08 Thread Gary Kotton
Hi,
The whole review process starts again from scratch :). You can feel free to 
reach out the guys who originally reviewed and then go from there. Good luck!
Thanks
Gary

From: Eduard Matei 
mailto:eduard.ma...@cloudfounders.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, December 8, 2014 at 10:25 AM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [git] Rebase failed

Thanks guys,
It seems one of the files I was trying to merge was actually removed by remote
(?).
So I removed it locally (git rm ... ) and now the rebase worked, but now I had an
extra patchset.
Will this be merged automatically or does it need reviewers again?

Thanks,
Eduard

On Mon, Dec 8, 2014 at 10:18 AM, Sylvain Bauza 
mailto:sba...@redhat.com>> wrote:

Le 08/12/2014 09:13, Eli Qiao a écrit :

On 2014-12-08 16:05, Eduard Matei wrote:
Hi,
My review got approved and it was ready to merge but automatic merge failed.
I tried to rebase manually but it still fails.

I'm not very familiar with git, can someone give a hand?

 git rebase -i master
error: could not apply db4a3bb... New Cinder volume driver for openvstorage.

hi Eduard
yeah, you need to manually rebase the changes.
git status shows which files need changing (file names in red); find the <<< markers
in
the file and modify it. After doing that, git add <file>; all conflicted
files need to
be done, then git rebase --continue.


Or you can just wave the magic wand and invoke "git mergetool" with your
favourite editor (vimdiff, kdiff3 or whatever else) and let this tool show
you the difference between the base branch (when the code branched), the
local branch (the master branch) and the remote branch (your changes)

My $0.02

-Sylvain


When you have resolved this problem, run "git rebase --continue".
If you prefer to skip this patch, run "git rebase --skip" instead.
To check out the original branch and stop rebasing, run "git rebase --abort".
Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New Cinder volume 
driver for openvstorage.
 git rebase --continue
etc/cinder/cinder.conf.sample: needs merge
You must edit all merge conflicts and then
mark them as resolved using git add

Review is: https://review.openstack.org/#/c/130733/

Thanks,
Eduard

--

Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com | eduard.ma...@cloudfounders.com

CloudFounders, The Private Cloud Software Company



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Thanks,
Eli (Li Yong) Qiao



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com | eduard.ma...@cloudfounders.com

Re: [openstack-dev] [git] Rebase failed

2014-12-08 Thread Jan Kundrát

On Monday, 8 December 2014 09:25:43 CEST, Eduard Matei wrote:

So I removed it locally (git rm ... ) and now the rebase worked, but now I had
an extra patchset.
Will this be merged automatically or does it need reviewers again?


Make sure you have a single commit which includes both the `git rm` and 
your original changes, and that this commit still has the same Change-Id 
line as the one you uploaded originally.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Fixing the console.log grows forever bug.

2014-12-08 Thread Thierry Carrez
Tony Breeds wrote:
> [...]
> So if that timeline is approximately correct:
> 
> - Can we wait this long to fix the bug?  As opposed to having it squashed in 
> Kilo.
> - What do we do in nova for the next ~12 months while we know there isn't a qemu
> to fix this?
> - Then once there is a qemu that fixes the issue, do we just say 'thou must 
> use
>   qemu 2.3.0' or would nova still need to support old and new qemu's ?

Fixing it in qemu looks like the right way to fix this issue. If it was
simple to fix, it would have been fixed already: this is one of our
oldest bugs with security impact. So I'd say yes, this should be fixed
in qemu, even if that takes a long time to propagate.

If someone finds an interesting way to work around this issue in Nova,
then by all means, add the workaround to Kilo and deprecate it once we
can assume everyone moved to newer qemu. But given it's been 3 years
this bug has been around, I wouldn't hold my breath.

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git] Rebase failed

2014-12-08 Thread Eduard Matei
Thanks guys,
It seems one of the files I was trying to merge was actually removed by
remote (?).
So I removed it locally (git rm ... ) and now the rebase worked, but now I had
an extra patchset.
Will this be merged automatically or does it need reviewers again?

Thanks,
Eduard

On Mon, Dec 8, 2014 at 10:18 AM, Sylvain Bauza  wrote:

>
> Le 08/12/2014 09:13, Eli Qiao a écrit :
>
>
> On 2014-12-08 16:05, Eduard Matei wrote:
>
> Hi,
> My review got approved and it was ready to merge but automatic merge
> failed.
> I tried to rebase manually but it still fails.
>
>  I'm not very familiar with git, can someone give a hand?
>
>   git rebase -i master
> error: could not apply db4a3bb... New Cinder volume driver for
> openvstorage.
>
>   hi Eduard
> yeah, you need to manually rebase the changes.
> git status shows which files need changing (file names in red); find the
> <<< markers in
> the file and modify it. After doing that, git add <file>; all conflicted
> files need to
> be done, then git rebase --continue.
>
>
>
> Or you can just wave the magic wand and invoke "git mergetool" with
> your favourite editor (vimdiff, kdiff3 or whatever else) and let this
> tool show you the difference between the base branch (when the code
> branched), the local branch (the master branch) and the remote branch (your
> changes)
>
> My $0.02
>
> -Sylvain
>
>
>   When you have resolved this problem, run "git rebase --continue".
> If you prefer to skip this patch, run "git rebase --skip" instead.
> To check out the original branch and stop rebasing, run "git rebase
> --abort".
> Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New Cinder
> volume driver for openvstorage.
>  git rebase --continue
> etc/cinder/cinder.conf.sample: needs merge
> You must edit all merge conflicts and then
> mark them as resolved using git add
>
>  Review is: https://review.openstack.org/#/c/130733/
>
>  Thanks,
> Eduard
>
>  --
>
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com | eduard.ma...@cloudfounders.com
>
>
> *CloudFounders, The Private Cloud Software Company*
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> Thanks,
> Eli (Li Yong) Qiao
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com | eduard.ma...@cloudfounders.com

*CloudFounders, The Private Cloud Software Company*

Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-08 Thread Gary Kotton
Hi,
I would expect that if a core does not understand a piece of code then he/she
would not approve it; they can always give a +1 and be honest that it is not
part of the code base that they understand. That is legitimate in such a
complex and large project.
We all make mistakes, it is the only way that we can learn and grow. Limiting 
the size of the core team is limiting the growth, quality and pulse of the 
project.
Thanks
Gary

From: Sylvain Bauza mailto:sba...@redhat.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, December 8, 2014 at 10:15 AM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Nova] Spring cleaning nova-core


Le 07/12/2014 23:27, Dan Smith a écrit :

The argument boils down to there is a communications cost to adding
someone to core, and therefore there is a maximum size before the
communications burden becomes too great.


I'm definitely of the mindset that the core team is something that has a
maximum effective size. Nova is complicated and always changing; keeping
everyone on top of current development themes is difficult. Just last
week, we merged a patch that bumped the version of an RPC API without
making the manager tolerant of the previous version. That's a theme
we've had for a while, and yet it was still acked by two cores.
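
For anyone unfamiliar with that theme: when an RPC API version is bumped,
the manager should stay tolerant of callers still pinned to the previous
version. A rough sketch with oslo.messaging, class and method names invented:

    import oslo_messaging as messaging

    class ExampleManager(object):
        """Bump the RPC version, but stay tolerant of older callers."""

        target = messaging.Target(version='3.1')

        def do_thing(self, context, foo, bar=None):
            # 'bar' was added in 3.1; defaulting it keeps 3.0 callers,
            # which don't send it, working during rolling upgrades.
            return foo if bar is None else (foo, bar)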

A major complaint I hear a lot is "one core told me to do X and then
another core told me to do !X". Obviously this will always happen, but I
do think that the larger and more disconnected the core team becomes,
the more often this will occur. If all the cores reviewed at the rate of
the top five and we still had a throughput problem, then evaluating the
optimal size would be a thing we'd need to do. However, even at the
current size, we have (IMHO) communication problems, mostly uninvolved
cores, and patches going in that break versioning rules. Making the team
arbitrarily larger doesn't seem like a good idea to me.


As a non-core, I can't speak about how cores communicate within the team. That 
said, I can just say it is sometimes very hard to review all the codepaths that 
Nova has, in particular when some new rules are coming in (for example, API 
microversions, online data migrations or reducing the tech debt in the 
Scheduler).

As a consequence, I can understand that some people can make mistakes when
reviewing a specific change, because they are not experts or because they missed
some important unwritten good practice.
That said, I think this situation doesn't necessarily mean that it can't be
improved by simple rules.

For example, the revert policy is a good thing: errors can happen, and
accepting that a revert can normally happen in the next couple of days
seems fine by me. Also, why not consider that some cores are more expert
than others in a given codepath? I mean, we all know who to address if we
have specific questions about a change (like ones impacting virt drivers,
objects, or the API). So why shouldn't a change be at least +1'd by these expert
cores before *approving* it?

As Nova is growing, I'm not sure it's good to cap the team. IMHO, mistakes
are human; that shouldn't be the reason the team stops growing. The question is
rather how we can make sure that disagreements won't be a problem.

(Now going back in my cavern)
-Sylvain



I will say that I am disappointed that we have cores who don't
regularly attend our IRC meetings. That makes the communication much
more complicated.


Agreed. We alternate the meeting times such that this shouldn't be hard,
IMHO.

--Dan





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git] Rebase failed

2014-12-08 Thread Sylvain Bauza


Le 08/12/2014 09:13, Eli Qiao a écrit :


On 2014-12-08 16:05, Eduard Matei wrote:

Hi,
My review got approved and it was ready to merge but automatic merge 
failed.

I tried to rebase manually but it still fails.

I'm not very familiar with git, can someone give a hand?

 git rebase -i master
error: could not apply db4a3bb... New Cinder volume driver for 
openvstorage.



hi Eduard
yeah, you need to manually rebase the changes.
git status shows which files need changing (file names in red);
find the <<< markers in
the file and modify it. After doing that, git add <file>; all
conflicted files need to
be done, then git rebase --continue.
be done, then git reabase --continue.



Or you can just wave the magic wand and invoke "git mergetool" with
your favourite editor (vimdiff, kdiff3 or whatever else) and let this
tool show you the difference between the base branch (when the code
branched), the local branch (the master branch) and the remote branch
(your changes)


My $0.02

-Sylvain


When you have resolved this problem, run "git rebase --continue".
If you prefer to skip this patch, run "git rebase --skip" instead.
To check out the original branch and stop rebasing, run "git rebase 
--abort".
Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New 
Cinder volume driver for openvstorage.

 git rebase --continue
etc/cinder/cinder.conf.sample: needs merge
You must edit all merge conflicts and then
mark them as resolved using git add

Review is: https://review.openstack.org/#/c/130733/

Thanks,
Eduard

--
*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com  | 
eduard.ma...@cloudfounders.com 


*CloudFounders, The Private Cloud Software Company*



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Thanks,
Eli (Li Yong) Qiao


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git] Rebase failed

2014-12-08 Thread Gary Kotton
Hi,
I do the following:

  1.  Go to the master branch: git checkout master
  2.  Get the latest master code: git pull
  3.  Check out your code: git checkout yourbranchname
  4.  Rebase this on the latest master: git rebase -i master
  5.  Find conflicts: git status (this will show conflicts)
  6.  Resolve them - look in the file and search for HEAD
  7.  Fix and then do git add filename-that-was-updated
  8.  Continue: git rebase --continue
  9.  Commit ...

Hope that helps
Thanks
Gary

From: Eduard Matei 
mailto:eduard.ma...@cloudfounders.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, December 8, 2014 at 10:05 AM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [git] Rebase failed

Hi,
My review got approved and it was ready to merge but automatic merge failed.
I tried to rebase manually but it still fails.

I'm not very familiar with git, can someone give a hand?

 git rebase -i master
error: could not apply db4a3bb... New Cinder volume driver for openvstorage.

When you have resolved this problem, run "git rebase --continue".
If you prefer to skip this patch, run "git rebase --skip" instead.
To check out the original branch and stop rebasing, run "git rebase --abort".
Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New Cinder volume 
driver for openvstorage.
 git rebase --continue
etc/cinder/cinder.conf.sample: needs merge
You must edit all merge conflicts and then
mark them as resolved using git add

Review is: https://review.openstack.org/#/c/130733/

Thanks,
Eduard

--

Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



CloudFounders, The Private Cloud Software Company

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >