Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-14 Thread sxmatch


On 2014-03-14 11:59, Zhangleiqiang (Trump) wrote:

From: sxmatch [mailto:sxmatch1...@gmail.com]
Sent: Friday, March 14, 2014 11:08 AM
To: Zhangleiqiang (Trump)
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
protection


On 2014-03-11 19:24, Zhangleiqiang wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 5:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang
zhangleiqi...@huawei.com
wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 4:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
zhangleiqi...@huawei.com wrote:

Hi all,



Besides the soft-delete state for volumes, I think there is a need to
introduce another "fake delete" state for volumes which have snapshots.


Currently, OpenStack refuses delete requests for volumes which
have snapshots. However, we have no way to limit users to using
only a specific snapshot rather than the original volume,
because the original volume is always visible to the users.



So I think we can permit users to delete volumes which have
snapshots and mark the volume with a fake-delete state. When all of
the snapshots of the volume have been deleted, the original
volume will be removed automatically.


Can you describe the actual use case for this?  I'm not sure I follow
why an operator would want to limit the owner of the volume to only
using a specific version of a snapshot.  It sounds like you are adding
another layer.  If that's the case, the problem should be solved at the
upper layer instead of in Cinder.

For example, one tenant's volume quota is five, and the tenant already has 5 volumes
and 1 snapshot. If the data in the base volume of the snapshot is
corrupted, the user will need to create a new volume from the
snapshot, but this operation will fail because there are already
5 volumes, and the original volume cannot be deleted either.
Hmm, how likely is it the snapshot is still sane when the base volume
is corrupted?

If the snapshot of the volume is COW, then the snapshot will still be sane when
the base volume is corrupted.
So, is it possible to really delete the volume and just keep the snapshot alive?
If the user doesn't want to use this volume right now, he can take a snapshot and
then delete the volume.


If we really delete the volume, the COW snapshot cannot be used. But if the data in 
the base volume is corrupt, we can use the snapshot normally or create a usable 
volume from the snapshot.

COW means copy-on-write: when a data block in the base volume is about to be 
written, that block is first copied to the snapshot.
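
For anyone unfamiliar with the mechanism, here is a purely illustrative toy model
of that copy-on-write behaviour (made-up classes, not Cinder code):

class Volume(object):
    def __init__(self, blocks):
        self.blocks = list(blocks)

class CowSnapshot(object):
    """Keeps a copy of a block only when the base is about to overwrite it."""
    def __init__(self, base):
        self.base = base
        self.saved = {}                      # block index -> original data

    def read(self, index):
        # The snapshot always sees the original data.
        return self.saved.get(index, self.base.blocks[index])

def write_block(volume, snapshot, index, data):
    # Copy-on-write: preserve the old block in the snapshot before writing.
    if index not in snapshot.saved:
        snapshot.saved[index] = volume.blocks[index]
    volume.blocks[index] = data

vol = Volume(['a', 'b', 'c'])
snap = CowSnapshot(vol)
write_block(vol, snap, 0, 'corrupted')       # the base volume changes...
assert snap.read(0) == 'a'                   # ...but the snapshot stays sane

This is also why really deleting the base volume breaks the snapshot: the
snapshot only holds copies of the blocks that were overwritten.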

Hope it helps.

Thanks for your explanation, it's very helpful.

If he wants it again, he can create a volume from this snapshot.

Any ideas?

Even if this case is possible, I don't see the 'fake delete' proposal
as the right way to solve the problem.  IMO, it simply violates what
the quota system is designed for and complicates quota metrics
calculation (there would be an actual quota which is only visible to
the admin/operator and an end-user-facing quota).  Why not contact the
operator to bump the upper limit of the volume quota instead?

I had some misunderstanding about Cinder's snapshots.
Fake delete is common if there is a chained-snapshot or snapshot-tree
mechanism. However, in Cinder only a volume can make a snapshot; a
snapshot cannot make another snapshot.

I agree with your bump-the-upper-limit method.

Thanks for your explanation.





Any thoughts? Welcome any advices.







--

zhangleiqiang



Best Regards



From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM


To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection







On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt
j...@johngarbutt.com

wrote:

On 6 March 2014 08:50, zhangyu (AI) zhangy...@huawei.com wrote:

It seems to be an interesting idea. In fact, a China-based public
IaaS, QingCloud, has provided a similar feature for their virtual
servers. Within 2 hours after a virtual server is deleted, the
server owner can decide whether or not to cancel this deletion
and recycle that deleted virtual server.

People make mistakes, and such a feature helps in urgent cases.
Any ideas here?

Nova has soft_delete and restore for servers. That sounds similar?

John
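
For illustration, a rough sketch of driving that flow against the compute API
(assuming reclaim_instance_interval is set in nova.conf and the
os-deferred-delete extension is enabled; the token, endpoint and server id
below are placeholders):

import requests

TOKEN = 'replace-with-a-keystone-token'
ENDPOINT = 'http://controller:8774/v2/<tenant-id>'     # placeholder endpoint
SERVER_ID = 'replace-with-a-server-uuid'
HEADERS = {'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'}

# With a reclaim interval configured, a normal DELETE only soft-deletes
# the instance...
requests.delete('%s/servers/%s' % (ENDPOINT, SERVER_ID), headers=HEADERS)

# ...so within that window the owner can change their mind and restore it.
requests.post('%s/servers/%s/action' % (ENDPOINT, SERVER_ID),
              headers=HEADERS, json={'restore': None})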



-Original Message-
From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
Sent: Thursday, March 06, 2014 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

Hi all,

OpenStack currently provides the delete volume function to the user.
But 

Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-14 Thread Renat Akhmerov
Folks,

Mistral and TaskFlow are significantly different technologies. With different 
set of capabilities, with different target audience.

We may not be doing enough to clarify all the differences, I admit that. The 
challenge here is that people tend to judge having minimal amount of 
information about both things. As always, devil in the details. Stan is 100% 
right, “seems” is not an appropriate word here. Java seems to be similar to C++ 
at the first glance for those who have little or no knowledge about them.

To be more consistent I won’t be providing all the general considerations that 
I’ve been using so far (in etherpads, MLs, in personal discussions), it doesn’t 
seem to be working well, at least not with everyone. So to make it better, like 
I said in that different thread: we’re evaluating TaskFlow now and will share 
the results. Basically, it’s what Boris said about what could and could not be 
implemented in TaskFlow. But since the very beginning of the project I never 
abandoned the idea of using TaskFlow some day when it’s possible. 

So, again: Joshua, we hear you, we’re working in that direction.

 
 I'm reminded of
 http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
 where it seemed like we were doing much better collaboration, what has
 happened to break this continuity?

Not sure why you think something is broken. We just want to finish the pilot 
with all the ‘must’ things working in it. This is a plan. Then we can revisit 
and change absolutely everything. Remember, to the great extent this is 
research. Joshua, this is what we talked about and agreed on many times. I know 
you might be anxious about that given the fact it’s taking more time than 
planned but our vision of the project has drastically evolved and gone far far 
beyond the initial Convection proposal. So the initial idea of POC is no longer 
relevant. Even though we finished the first version in December, we realized it 
wasn’t something that should have been shared with the community since it 
lacked some essential things.


Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-14 Thread Joshua Harlow
Thanks Renat,

I'll keep waiting, and hoping that we can figure this out for everyone's 
benefit. Because in the end we are all much stronger working together and much 
weaker when not.

Sent from my really tiny device...

On Mar 13, 2014, at 11:41 PM, Renat Akhmerov 
rakhme...@mirantis.com wrote:

Folks,

Mistral and TaskFlow are significantly different technologies. With different 
set of capabilities, with different target audience.

We may not be doing enough to clarify all the differences, I admit that. The 
challenge here is that people tend to judge having minimal amount of 
information about both things. As always, devil in the details. Stan is 100% 
right, “seems” is not an appropriate word here. Java seems to be similar to C++ 
at the first glance for those who have little or no knowledge about them.

To be more consistent I won’t be providing all the general considerations that 
I’ve been using so far (in etherpads, MLs, in personal discussions), it doesn’t 
seem to be working well, at least not with everyone. So to make it better, like 
I said in that different thread: we’re evaluating TaskFlow now and will share 
the results. Basically, it’s what Boris said about what could and could not be 
implemented in TaskFlow. But since the very beginning of the project I never 
abandoned the idea of using TaskFlow some day when it’s possible.

So, again: Joshua, we hear you, we’re working in that direction.


I'm reminded of
http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
 where it seemed like we were doing much better collaboration, what has
happened to break this continuity?

Not sure why you think something is broken. We just want to finish the pilot 
with all the ‘must’ things working in it. This is a plan. Then we can revisit 
and change absolutely everything. Remember, to the great extent this is 
research. Joshua, this is what we talked about and agreed on many times. I know 
you might be anxious about that given the fact it’s taking more time than 
planned but our vision of the project has drastically evolved and gone far far 
beyond the initial Convection proposal. So the initial idea of POC is no longer 
relevant. Even though we finished the first version in December, we realized it 
wasn’t something that should have been shared with the community since it 
lacked some essential things.


Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-14 Thread Radomir Dopieralski
Hello,

I also think that this thread is going in the wrong direction, but I
don't think the direction Boris wants is the correct one either. Frankly
I'm a little surprised that nobody mentioned another advantage that soft
delete gives us, the one that I think it was actually used for originally.

You see, soft delete is an optimization. It's there to make the system
work faster as a whole, have less code and be simpler to maintain and debug.

How does it do it, when, as clearly shown in the first post in this
thread, it makes the queries slower, requires additional indices in the
database and more logic in the queries? The answer is, by doing more
with those queries, by making you write less code, execute fewer queries
to the databases and avoid duplicating the same data in multiple places.

OpenStack is a big, distributed system of multiple databases that
sometimes rely on each other and cross-reference their records. It's not
uncommon to have some long-running operation started, that uses some
data, and then, in the middle of its execution, have that data deleted.
With soft delete, that's not a problem -- the operation can continue
safely and proceed as scheduled, with the data it was started with in
the first place -- it still has access to the deleted records as if
nothing happened. You simply won't be able to schedule another operation
like that with the same data, because it has been soft-deleted and won't
pass the validation at the beginning (or even won't appear in the UI or
CLI). This solves a lot of race conditions, error handling, additional
checks to make sure the record still exists, etc.

Without soft delete, you need to write custom code every time to handle
the case of a record being deleted mid-operation, including all the
possible combinations of which record and when. Or you need to copy all
the relevant data in advance over to whatever is executing that
operation. This cannot be abstracted away entirely (although tools like
TaskFlow help), as this is specific to the case you are handling. And
it's not easy to find all the places where you can have a race condition
like that -- especially when you are modifying existing code that has
been relying on soft delete before. You can have bugs undetected for
years, that only appear in production, on very large deployments, and
are impossible to reproduce reliably.

There are more similar cases like that, including cascading deletes and
more advanced stuff, but I think this single case already shows that
the advantages of soft delete outweigh its disadvantages.
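
To make the pattern concrete, here is a minimal sketch of soft delete on a toy
standalone schema (not the real Nova/Cinder models or the oslo code):

import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base      # SQLAlchemy >= 1.4
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Volume(Base):
    __tablename__ = 'volumes'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))
    deleted = Column(Integer, default=0)          # 0 = live, else deleted
    deleted_at = Column(DateTime, nullable=True)

def soft_delete(session, volume):
    # Mark the row rather than removing it; an operation that already
    # holds the id can keep reading the record afterwards.
    volume.deleted = volume.id
    volume.deleted_at = datetime.datetime.utcnow()
    session.commit()

def list_active(session):
    # New operations only see live rows, so they fail validation up
    # front instead of racing against a hard delete later.
    return session.query(Volume).filter_by(deleted=0).all()

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Volume(id=1, name='vol-1'))
session.commit()
soft_delete(session, session.query(Volume).filter_by(id=1).first())

assert list_active(session) == []                           # hidden from new work
assert session.query(Volume).filter_by(id=1).first().name   # still readable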

On 13/03/14 19:52, Boris Pavlovic wrote:
 Hi all, 
 
 
 I would like to fix direction of this thread. Cause it is going in wrong
 direction. 
 
 To assume:
 1) Yes restoring already deleted recourses could be useful. 
 2) Current approach with soft deletion is broken by design and we should
 get rid of them. 
 
 More about why I think that it is broken: 
 1) When you are restoring some resource you should restore N records
 from N tables (e.g. VM)
 2) Restoring sometimes means not only restoring DB records. 
 3) Not all resources should be restorable (e.g. why I need to restore
 fixed_ip? or key-pairs?)
 
 
 So what we should think about is:
 1) How to implement restoring functionally in common way (e.g. framework
 that will be in oslo) 
 2) Split of work of getting rid of soft deletion in steps (that I
 already mention):
 a) remove soft deletion from places where we are not using it
 b) replace internal code where we are using soft deletion to that framework 
 c) replace API stuff using ceilometer (for logs) or this framework (for
 restorable stuff)
 
 
 To put in a nutshell: Restoring Delete resources / Delayed Deletion !=
 Soft deletion. 
 
 
 Best regards,
 Boris Pavlovic 
 
 
 
 On Thu, Mar 13, 2014 at 9:21 PM, Mike Wilson geekinu...@gmail.com wrote:
 
 For some guests we use the LVM imagebackend and there are times when
 the guest is deleted on accident. Humans, being what they are, don't
 back up their files and don't take care of important data, so it is
 not uncommon to use lvrestore and undelete an instance so that
 people can get their data. Of course, this is not always possible if
 the data has been subsequently overwritten. But it is common enough
 that I imagine most of our operators are familiar with how to do it.
 So I guess my saying that we do it on a regular basis is not quite
 accurate. Probably would be better to say that it is not uncommon to
 do this, but definitely not a daily task or something of that ilk.
 
 I have personally undeleted an instance a few times after
 accidental deletion also. I can't remember the specifics, but I do
 remember doing it :-).
 
 -Mike
 
 
 On Tue, Mar 11, 2014 at 12:46 PM, Johannes Erdfelt
 johan...@erdfelt.com wrote:
 
 On Tue, Mar 11, 2014, Mike Wilson geekinu...@gmail.com
 

Re: [openstack-dev] [oslo.messaging] mongodb notification driver

2014-03-14 Thread Hiroyuki Eguchi
Thank you for the reply.

I understand that the notification drivers should be limited to a set of essential drivers.

I would like to consider using stacktach.


Thanks
--hiroyuki


-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com] 
Sent: Thursday, March 13, 2014 2:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo.messaging] mongodb notification driver

You may want to consider StackTach for troubleshooting (that's what it was 
initially created for)

https://github.com/rackerlabs/stacktach

It will consume and record the events as well as give you a GUI and command-line 
tools for tracing calls by server, request_id, event type, etc. 

Ping me if you have any issues getting it going.

Cheers
-S


From: Hiroyuki Eguchi [h-egu...@az.jp.nec.com]
Sent: Tuesday, March 11, 2014 11:09 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [oslo.messaging] mongodb notification driver

I'm envisioning a mongodb notification driver.

Currently, for troubleshooting, I'm using the log notification driver, sending 
the notification log to an rsyslog server, and storing the log in a database 
using the rsyslog-mysql package.

I would like to make it more simple, So I came up with this feature.

Ceilometer can manage notifications using mongodb, but Ceilometer should have 
the role of Metering, not Troubleshooting.

If you have any comments or suggestion, please let me know.
And please let me know if there's any discussion about this.

Thanks.
--hiroyuki
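
For reference, a rough sketch of one way to get notifications into MongoDB
without adding a new in-tree driver: an out-of-band oslo.messaging notification
listener that writes each event to a collection (the MongoDB URL, database and
collection names are placeholders, and the exact oslo.messaging helper names
depend on the release):

import oslo_messaging as messaging
from oslo_config import cfg
import pymongo

class MongoEndpoint(object):
    def __init__(self, collection):
        self.collection = collection

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # One document per notification, for ad-hoc troubleshooting queries.
        self.collection.insert_one({
            'publisher_id': publisher_id,
            'event_type': event_type,
            'payload': payload,
        })

def main():
    # On older releases this may be get_transport() instead.
    transport = messaging.get_notification_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications')]
    collection = pymongo.MongoClient('mongodb://localhost:27017')['ops']['events']
    listener = messaging.get_notification_listener(
        transport, targets, [MongoEndpoint(collection)], executor='threading')
    listener.start()
    listener.wait()

if __name__ == '__main__':
    main()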

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Moving swift3 to stackforge (was: Re: [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka)

2014-03-14 Thread Chmouel Boudjnah
On Thu, Mar 13, 2014 at 3:44 PM, Sean Dague s...@dague.net wrote:

 In Juno I'd really be pro removing all the devstack references to git
 repos not on git.openstack.org, because these kinds of failures have
 real impact.

 Currently we have 4 repositories that fit this bill:

 SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
 NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
 RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
 SPICE_REPO=${SPICE_REPO:-
 http://anongit.freedesktop.org/git/spice/spice-html5.git}

 I think all of these probably need to be removed from devstack. We
 should be using release versions (preferably i


As for swift3, I have added an issue to the project asking whether it can
be moved to stackforge:

https://github.com/fujita/swift3/issues/62

Fujita (the maintainer of swift3, CC'd on this email) has commented that he's
been working on it.

This is going to be quite urgent for the next OpenStack release. Fujita, do
you need help with this?

Regards,
Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-14 Thread Stan Lagun
Joshua,

why wait? Why not just help Renat with his research on that integration and
bring your own vision to the table? Write some 1-page architecture
description on how Mistral can be built on top of TaskFlow and we discuss
pros and cons. In would be much more productive.


On Fri, Mar 14, 2014 at 11:35 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:

  Thanks Renat,

  I'll keep waiting, and hoping that we can figure this out for everyone's
 benefit. Because in the end we are all much stronger working together and
 much weaker when not.

 Sent from my really tiny device...

 On Mar 13, 2014, at 11:41 PM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

  Folks,

  Mistral and TaskFlow are significantly different technologies. With
 different set of capabilities, with different target audience.

  We may not be doing enough to clarify all the differences, I admit that.
 The challenge here is that people tend to judge having minimal amount of
 information about both things. As always, devil in the details. Stan is
 100% right, seems is not an appropriate word here. Java seems to be
 similar to C++ at the first glance for those who have little or no
 knowledge about them.

  To be more consistent I won't be providing all the general
 considerations that I've been using so far (in etherpads, MLs, in personal
 discussions), it doesn't seem to be working well, at least not with
 everyone. So to make it better, like I said in that different thread: we're
 evaluating TaskFlow now and will share the results. Basically, it's what
 Boris said about what could and could not be implemented in TaskFlow. But
 since the very beginning of the project I never abandoned the idea of using
 TaskFlow some day when it's possible.

  So, again: Joshua, we hear you, we're working in that direction.


 I'm reminded of

 http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
 where it seemed like we were doing much better collaboration, what has
 happened to break this continuity?


  Not sure why you think something is broken. We just want to finish the
 pilot with all the 'must' things working in it. This is a plan. Then we can
 revisit and change absolutely everything. Remember, to the great extent
 this is research. Joshua, this is what we talked about and agreed on many
 times. I know you might be anxious about that given the fact it's taking
 more time than planned but our vision of the project has drastically
 evolved and gone far far beyond the initial Convection proposal. So the
 initial idea of POC is no longer relevant. Even though we finished the
 first version in December, we realized it wasn't something that should have
 been shared with the community since it lacked some essential things.


  Renat Akhmerov
 @ Mirantis Inc.

   ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][Designate] A question about DNSaaS?

2014-03-14 Thread Yuzhou (C)
Hi stackers,

Are there any plans for DNSaaS on the Neutron roadmap?

As far as I know, Designate provides DNSaaS services for OpenStack.

Why is DNSaaS an independent service, and not a network service like LBaaS or VPNaaS?

Thanks,

Zhou Yu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-14 Thread victor stinner
Hi,

I'm working for eNovance and we are working on porting OpenStack to Python 3. 
Status of the port:

  http://techs.enovance.com/6722/status-of-the-openstack-port-to-python-3-2
  https://wiki.openstack.org/wiki/Python3

I understand that it is getting late for Python 3 changes before the Icehouse release, 
but I don't agree with the second part of your mail.

David wrote:
 (...) I'm not sure it makes sense to do this work piecemeal until
 we are near ready to introduce a py3 gate job.

I'm not sure that I understood correctly. You want first to see all Python 3 
tests pass, and then accept changes to fix Python 3 issues? Adding a py33 gate 
is nice, but it is almost useless until it becomes voting, because nobody reads 
the failures of a non-voting job. And I don't expect anyone to care about the py33 
gate before it becomes voting.

It's not possible to fix all Python 3 issues at once. It requires many small 
changes which are carefully reviewed and discussed. It is not possible to see all 
issues at once either. For example, you first have to fix obvious syntax 
errors in order to see less trivial Python 3 issues. Changes are done incrementally, 
as other changes in OpenStack.
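
To make the kind of change concrete, here are a few of the typical small fixes
involved, using six (illustrative only, not taken from any particular patch
under review):

from __future__ import print_function

import six

d = {'a': 1, 'b': 2}

# dict.iteritems() no longer exists on Python 3:
for key, value in six.iteritems(d):
    print(key, value)

# "except ValueError, exc" is a syntax error on Python 3:
try:
    int('not a number')
except ValueError as exc:
    message = six.text_type(exc)          # unicode() is gone on Python 3

# basestring does not exist on Python 3:
def is_text(value):
    return isinstance(value, six.string_types)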

Yes, it's possible to reintroduce Python 3 incompatible code, but I expect far 
fewer regressions compared to the number of fixed issues.

Cyril Roelandt is improving the hacking tool to detect the most obvious cases 
of Python 3 incompatible code:

  https://review.openstack.org/#/c/80171/

We are working on the clients and Oslo Incubator first, but we are also preparing 
the port of the servers.

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] nominating Ildikó Váncsa and Nadya Privalova to ceilometer-core

2014-03-14 Thread Julien Danjou
On Mon, Mar 10 2014, Eoghan Glynn wrote:

 I'd like to nominate Ildikó Váncsa and Nadya Privalova as ceilometer
 cores in recognition of their contributions([1], [2]) over the Icehouse
 cycle, and looking forward to their continued participation for Juno.

I've just added Nadya and Ildikó to the ceilometer-core group.

Welcome to both of you!

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] libvirt+Xen+OVS VLAN networking in icehouse

2014-03-14 Thread Simon Pasquier
Hi,

I've played a little with XenAPI + OVS. You might be interested in this
bug report [1] that describes a related problem I've seen in this
configuration. I'm not sure about Xen libvirt though. My assumption is
that the future-proof solution for using Xen with OpenStack is the
XenAPI driver but someone from Citrix (Bob?) may confirm.

Note also that the security groups are currently broken with libvirt +
OVS. As you noted, the iptables rules are applied directly to the OVS
port thus they are not effective (see [2] for details). There's work in
progress [3][4] to fix this critical issue. As far as the XenAPI driver
is concerned, there is another bug [5] tracking the lack of support for
security groups which should be addressed by the OVS firewall driver [6].

HTH,

Simon

[1] https://bugs.launchpad.net/neutron/+bug/1268955
[2] https://bugs.launchpad.net/nova/+bug/1112912
[3] https://review.openstack.org/21946
[4] https://review.openstack.org/44596
[5] https://bugs.launchpad.net/neutron/+bug/1245809
[6] https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver

On 13/03/2014 19:35, iain macdonnell wrote:
 I've been playing with an icehouse build grabbed from fedorapeople. My
 hypervisor platform is libvirt-xen, which I understand may be
 deprecated for icehouse(?) but I'm stuck with it for now, and I'm
 using VLAN networking. It almost works, but I have a problem with
 networking. In Havana, the VIF gets placed on a legacy ethernet
 bridge, and a veth pair connects that to the OVS integration bridge.
 I understand that this was done to enable iptables filtering at the
 VIF. In Icehouse, the VIF appears to get placed directly on the
 integration bridge - i.e. the libvirt XML includes something like:
 
  <interface type='bridge'>
    <mac address='fa:16:3e:e7:1e:c3'/>
    <source bridge='br-int'/>
    <script path='vif-bridge'/>
    <target dev='tap43b9d367-32'/>
  </interface>
 
 
 The problem is that the port on br-int does not have the VLAN tag.
 i.e. I'll see something like:
 
 Bridge br-int
 Port tap43b9d367-32
 Interface tap43b9d367-32
 Port qr-cac87198-df
 tag: 1
 Interface qr-cac87198-df
 type: internal
 Port int-br-bond0
 Interface int-br-bond0
 Port br-int
 Interface br-int
 type: internal
 Port tapb8096c18-cf
 tag: 1
 Interface tapb8096c18-cf
 type: internal
 
 
 If I manually set the tag using 'ovs-vsctl set port tap43b9d367-32
 tag=1', traffic starts flowing where it needs to.
 
 I've traced this back a bit through the agent code, and find that the
 bridge port is ignored by the agent because it does not have any
 external_ids (observed with 'ovs-vsctl list Interface'), and so the
 update process that normally sets the tag is not invoked. It appears
 that Xen is adding the port to the bridge, but nothing is updating it
 with the neutron-specific external_ids that the agent expects to
 see.
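
For reference, a rough sketch of the sort of manual workaround implied above:
tagging the port and adding the external_ids the agent keys on. The external_ids
key names here are assumptions based on what the Nova OVS VIF driver sets, and
the UUIDs/MAC are placeholders:

import subprocess

PORT = 'tap43b9d367-32'
NEUTRON_PORT_ID = 'replace-with-the-neutron-port-uuid'   # placeholder
MAC = 'fa:16:3e:e7:1e:c3'
VM_UUID = 'replace-with-the-instance-uuid'               # placeholder

def ovs_vsctl(*args):
    subprocess.check_call(('ovs-vsctl',) + args)

# Tag the port so traffic flows on the right VLAN...
ovs_vsctl('set', 'Port', PORT, 'tag=1')

# ...and add the external_ids the agent looks for, so it picks the port up.
ovs_vsctl('set', 'Interface', PORT,
          'external-ids:iface-id=%s' % NEUTRON_PORT_ID,
          'external-ids:iface-status=active',
          'external-ids:attached-mac=%s' % MAC,
          'external-ids:vm-uuid=%s' % VM_UUID)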
 
 Before I dig any further, I thought I'd ask; is this stuff supposed to
 work at this point? Is it intentional that the VIF is getting placed
 directly on the integration bridge now? Might I be missing something
 in my configuration?
 
 FWIW, I've tried the ML2 plugin as well as the legacy OVS one, with
 the same result.
 
 TIA,
 
 ~iain
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-14 Thread Alexei Kornienko

On 03/14/2014 09:37 AM, Radomir Dopieralski wrote:

Hello,

I also think that this thread is going in the wrong direction, but I
don't think the direction Boris wants is the correct one either. Frankly
I'm a little surprised that nobody mentioned another advantage that soft
delete gives us, the one that I think it was actually used for originally.

You see, soft delete is an optimization. It's there to make the system
work faster as a whole, have less code and be simpler to maintain and debug.

How does it do it, when, as clearly shown in the first post in this
thread, it makes the queries slower, requires additional indices in the
database and more logic in the queries? The answer is, by doing more
with those queries, by making you write less code, execute fewer queries
to the databases and avoid duplicating the same data in multiple places.

OpenStack is a big, distributed system of multiple databases that
sometimes rely on each other and cross-reference their records. It's not
uncommon to have some long-running operation started, that uses some
data, and then, in the middle of its execution, have that data deleted.
With soft delete, that's not a problem -- the operation can continue
safely and proceed as scheduled, with the data it was started with in
the first place -- it still has access to the deleted records as if
nothing happened. You simply won't be able to schedule another operation
like that with the same data, because it has been soft-deleted and won't
pass the validation at the beginning (or even won't appear in the UI or
CLI). This solves a lot of race conditions, error handling, additional
checks to make sure the record still exists, etc.
1) Operations in SQL run inside transactions, so deleted records will 
still be visible to other clients until the transaction commits.
2) If someone inside the same transaction tries to use a record that is 
already deleted, it's definitely an error in our code and should be fixed.
I don't think that such a use case can be used as an argument to keep 
soft-deleted records.


Without soft delete, you need to write custom code every time to handle
the case of a record being deleted mid-operation, including all the
possible combinations of which record and when. Or you need to copy all
the relevant data in advance over to whatever is executing that
operation. This cannot be abstracted away entirely (although tools like
TaskFlow help), as this is specific to the case you are handling. And
it's not easy to find all the places where you can have a race condition
like that -- especially when you are modifying existing code that has
been relying on soft delete before. You can have bugs undetected for
years, that only appear in production, on very large deployments, and
are impossible to reproduce reliably.

There are more similar cases like that, including cascading deletes and
more advanced stuff, but I think this single case already shows that
the advantages of soft delete outweigh its disadvantages.

On 13/03/14 19:52, Boris Pavlovic wrote:

Hi all,


I would like to fix direction of this thread. Cause it is going in wrong
direction.

To assume:
1) Yes restoring already deleted recourses could be useful.
2) Current approach with soft deletion is broken by design and we should
get rid of them.

More about why I think that it is broken:
1) When you are restoring some resource you should restore N records
from N tables (e.g. VM)
2) Restoring sometimes means not only restoring DB records.
3) Not all resources should be restorable (e.g. why I need to restore
fixed_ip? or key-pairs?)


So what we should think about is:
1) How to implement restoring functionally in common way (e.g. framework
that will be in oslo)
2) Split of work of getting rid of soft deletion in steps (that I
already mention):
a) remove soft deletion from places where we are not using it
b) replace internal code where we are using soft deletion to that framework
c) replace API stuff using ceilometer (for logs) or this framework (for
restorable stuff)


To put in a nutshell: Restoring Delete resources / Delayed Deletion !=
Soft deletion.


Best regards,
Boris Pavlovic



On Thu, Mar 13, 2014 at 9:21 PM, Mike Wilson geekinu...@gmail.com wrote:

 For some guests we use the LVM imagebackend and there are times when
 the guest is deleted on accident. Humans, being what they are, don't
 back up their files and don't take care of important data, so it is
 not uncommon to use lvrestore and undelete an instance so that
 people can get their data. Of course, this is not always possible if
 the data has been subsequently overwritten. But it is common enough
 that I imagine most of our operators are familiar with how to do it.
 So I guess my saying that we do it on a regular basis is not quite
 accurate. Probably would be better to say that it is not uncommon to
 do this, but definitely not a daily task or something of that ilk.

 I have 

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-14 Thread Radomir Dopieralski
On 14/03/14 11:08, Alexei Kornienko wrote:
 On 03/14/2014 09:37 AM, Radomir Dopieralski wrote:

[snip]

 OpenStack is a big, distributed system of multiple databases that
 sometimes rely on each other and cross-reference their records. It's not
 uncommon to have some long-running operation started, that uses some
 data, and then, in the middle of its execution, have that data deleted.
 With soft delete, that's not a problem -- the operation can continue
 safely and proceed as scheduled, with the data it was started with in
 the first place -- it still has access to the deleted records as if
 nothing happened. You simply won't be able to schedule another operation
 like that with the same data, because it has been soft-deleted and won't
 pass the validation at the beginning (or even won't appear in the UI or
 CLI). This solves a lot of race conditions, error handling, additional
 checks to make sure the record still exists, etc.

 1) Operations in SQL run inside transactions, so deleted records will
 still be visible to other clients until the transaction commits.

 2) If someone inside the same transaction tries to use a record that is
 already deleted, it's definitely an error in our code and should be fixed.
 I don't think that such a use case can be used as an argument to keep
 soft-deleted records.

Yes, that's why it works just fine when you have a single database in
one place. You can have locks, transactions, cascading operations and
all this stuff, and you have a guarantee that you are always in a
consistent state, unless there is a horrible bug.

OpenStack, however, is not a single database. There is no system-wide
solution for locks, transactions or rollbacks. Every time you reference
anything across databases, you are going to run into this problem.
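
As a toy illustration of that point (plain dicts standing in for two services'
databases; no real OpenStack code):

# Service A captures a reference; service B deletes the record in another
# database; no shared transaction protects A.
cinder_db = {'vol-1': {'status': 'available', 'deleted': False}}

def service_a_start_backup(volume_id):
    # A only records the id here; the actual read happens later.
    return {'volume_id': volume_id}

def service_b_hard_delete(volume_id):
    del cinder_db[volume_id]                 # the row is simply gone

def service_b_soft_delete(volume_id):
    cinder_db[volume_id]['deleted'] = True   # the row stays readable

job = service_a_start_backup('vol-1')
service_b_hard_delete('vol-1')
try:
    cinder_db[job['volume_id']]              # A now needs custom error handling
except KeyError:
    pass

cinder_db['vol-1'] = {'status': 'available', 'deleted': False}
job = service_a_start_backup('vol-1')
service_b_soft_delete('vol-1')
assert cinder_db[job['volume_id']]['deleted']    # A can still read the record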

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-14 Thread Sean Dague
On 03/14/2014 04:49 AM, victor stinner wrote:
 Hi,
 
 I'm working for eNovance and we are working on porting OpenStack to Python 3. 
 Status of the port:
 
   http://techs.enovance.com/6722/status-of-the-openstack-port-to-python-3-2
   https://wiki.openstack.org/wiki/Python3
 
 I understand that it becomes late for Python 3 changes before Icehouse 
 release, but I don't agree on the second part of your mail.
 
 David wrote:
 (...) I'm not sure it makes sense to do this work piecemeal until
 we are near ready to introduce a py3 gate job.
 
 I'm not sure that I understood correctly. You want first to see all Python 3 
 tests pass, and then accept changes to fix Python 3 issues? Adding a py33 
 gate is nice, but it is almost useless before it becomes voting if nobody 
 reads failures. And I don't expect that anyone will care of the py33 gate 
 before it becomes voting.
 
 It's not possible to fix all Python 3 issues at once. It requires many small 
 changes which are carefully viewed and discussed. It is not possible to see 
 all issues at once neither. For example, you have first to fix obvious syntax 
 errors to see less trivial Python 3 issues. Changes are done incrementally, 
 as other changes in OpenStack.
 
 Yes, it's possible to reintroduce Python 3 incompatible code, but I expect 
 much fewer regressions compared to the number of fixed issues.

 Cyril Roelandt is improving the hacking tool to detect the most obvious cases 
 of Python 3 incompatible code:
 
   https://review.openstack.org/#/c/80171/
 
 We are working on the clients and Oslo Incubator first, but we are also preparing 
 the port of the servers.

The issue is this generates a lot of unrelated churn and merge conflicts
with actual feature code and bug fixes.

So what we need is a game plan, which goes as follows:

1. demonstrate all requirements.txt and test-requirements.txt are
python3 compatible
1.1 if they aren't work on a remediation plan to get us there

once completed
2. come up with an audit plan on the obvious python3 issues

3. designate a focused 2 week window to land all the python3 issues and
turn on gating, we'll make it a priority review topic during that period
of time.

That 2 week window needs to happen within milestone 1 or 2 of a cycle.
After that, it's a distraction. So if the python 3 team doesn't have the
ducks in a row by then, we punt to next release.

Because I think the one-off patch model for changes like this just doesn't
work, and leaves us in this very weird state of code.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Selenium (which is non-free) is back again in Horizon (Icehouse Beta 3)

2014-03-14 Thread Thomas Goirand
Hi,

A few months ago, I raised the fact that Selenium *CANNOT* be a hard
test-requirements.txt build-dependency of Horizon, because it is
non-free (because of binaries like the browser plug-ins not being
build-able from source). So it was removed.

Now, on the new Icehouse beta 3, it's back again, and I get some unit
tests errors (see below).

Guys, could we stop having this kind of regression, and make Selenium
tests not mandatory? They aren't runnable in Debian.

Cheers,

Thomas Goirand (zigo)

==
ERROR: Failure: ImportError (No module named selenium)
--
  File openstack_dashboard/test/integration_tests/helpers.py, line 17,
in module
import selenium
ImportError: No module named selenium

==
ERROR: Failure: ImportError (No module named selenium.webdriver.common.keys)
 File openstack_dashboard/test/integration_tests/tests/test_login.py,
line 15, in module
import selenium.webdriver.common.keys as keys
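
One common way to avoid failures like the above when selenium is not installed
(a generic sketch, not the fix that actually landed in Horizon) is to import it
lazily and skip the tests when it is missing:

import unittest

try:
    import selenium  # noqa
    HAS_SELENIUM = True
except ImportError:
    HAS_SELENIUM = False

@unittest.skipUnless(HAS_SELENIUM, 'selenium is not installed')
class LoginPageTests(unittest.TestCase):
    def test_login_page_loads(self):
        # A real test would drive a browser via selenium.webdriver here.
        self.assertTrue(HAS_SELENIUM)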

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Manual scheduling nodes in maintenance mode

2014-03-14 Thread Lucas Alvares Gomes
On Wed, Mar 12, 2014 at 8:07 PM, Chris Jones c...@tenshu.net wrote:


 Hey

 I wanted to throw out an idea that came to me while I was working on
 diagnosing some hardware issues in the Tripleo CD rack at the sprint last
 week.

 Specifically, if a particular node has been dropped from automatic
 scheduling by the operator, I think it would be super useful to be able to
 still manually schedule the node. Examples might be that someone is
 diagnosing a hardware issue and wants to boot an image that has all their
 favourite diagnostic tools in it, or they might be booting an image they
 use for updating firmwares, etc (frankly, just being able to boot a
 generic, unmodified host OS on a node can be super useful if you're trying
 to crash cart the machine for something hardware related).

 Any thoughts? :)


+1 I like the idea and think it's quite useful.

Drivers in Ironic already expose a rescue interface [1] (which I don't think
we have put much thought into yet); perhaps the PXE driver could
implement something similar to what you want to do here?

[1]
https://github.com/openstack/ironic/blob/master/ironic/drivers/base.py#L60
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Selenium (which is non-free) is back again in Horizon (Icehouse Beta 3)

2014-03-14 Thread Sascha Peilicke




On 14 March 2014 12:32:41, Thomas Goirand z...@debian.org wrote:


Hi,

A few months ago, I raised the fact that Selenium *CANNOT* be a hard
test-requirements.txt build-dependency of Horizon, because it is
non-free (because of binaries like the browser plug-ins not being
build-able from source). So it was removed.

Now, on the new Icehouse beta 3, it's back again, and I get some unit
tests errors (see below).

Guys, could we stop having this kind of regression, and make Selenium
tests not mandatory? They aren't runnable in Debian.


Identical situation with openSUSE. And I guess Fedora is no different.



Cheers,

Thomas Goirand (zigo)

==
ERROR: Failure: ImportError (No module named selenium)
--
  File openstack_dashboard/test/integration_tests/helpers.py, line 17,
in module
import selenium
ImportError: No module named selenium

==
ERROR: Failure: ImportError (No module named selenium.webdriver.common.keys)
 File openstack_dashboard/test/integration_tests/tests/test_login.py,
line 15, in module
import selenium.webdriver.common.keys as keys

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Best regards,
Sascha Peilicke



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-14 Thread Dina Belova
Russell, first of all thanks for your opinion and taking part in this
discussion.

 What we need to dig in to is *why* do you feel it needs to be global?

 I'm trying to understand what you're saying here ... do you mean that
 since we're trying to get to where there's a global scheduler, that it
 makes sense there should be a central point for this, even if the API is
 through the existing compute/networking/storage APIs?

 If so, I think that makes sense.  However, until we actually have
 something for scheduling, I think we should look at implementing all of
 this in the services, and perhaps share some code with a Python library.

Well, let me give you some reasons why I'm thinking about a separate
service with its own endpoints, etc.

* as you said, we propose reservations of different resource types for
OpenStack, and compute resources (VMs and hosts) are not the only ones;
* there should be support for time management, checking lease statuses,
sending user notifications, etc. - even if that were implemented as a
library, it would need a separately running service in Nova, because there will
be some specific periodic tasks and so on. Of course, that might be part of
nova-scheduler, but in that case things such as sending notifications would
look strange there - and it would allow managing only VMs, not hosts, at
least if we are speaking about the traditional Nova scheduling process;
* and the last one: the previous points might be implemented as a library and
work fine, I quite agree with you here. Although in that case there would be
no centralised point of lease management, just as there is no single point for
quota management now. And if for quotas it is merely inconvenient to manage
them in huge clouds, for leases it would be simply impossible to
have one picture of what will happen with all resources in the future - as
there are many things to keep track of: compute capacity, storage
capacity, etc.

The last point seems the most important to me, as the idea of centralised
resource time management looks better to me than the idea of each service
running essentially the same code for its own reservations, plus the fact
that we expect some scheduling dependencies between heterogeneous resources,
such as reserving a volume together with an instance booting from it. I quite
agree that for the user it will be more comfortable to use the services as
they are, and as Sylvain said, that can be implemented quite nicely through,
for example, Nova extensions (as is done now for VM reservations). But at the
same time all logic related to leases would be in one place, allowing cloud
administrators to manage cloud capacity usage over time from a single point.

And I'm not even talking about the additional load on the core reviewers of all
projects if that feature were implemented in every single project, although
there is already an existing team working on Climate. That's not the main thing.

As I said, that's my personal opinion, and I'll be really glad to discuss this
problem and solve it in the best way chosen by the community, taking into
account different points of view and ideas.

Thanks



On Thu, Mar 13, 2014 at 6:44 PM, Russell Bryant rbry...@redhat.com wrote:

 On 03/12/2014 12:14 PM, Sylvain Bauza wrote:
  Hi Russell,
  Thanks for replying,
 
 
  2014-03-12 16:46 GMT+01:00 Russell Bryant rbry...@redhat.com:
  The biggest concern seemed to be that we weren't sure whether Climate
  makes sense as an independent project or not.  We think it may make
 more
  sense to integrate what Climate does today into Nova directly.  More
  generally, we think reservations of resources may best belong in the
  APIs responsible for managing those resources, similar to how quota
  management for resources lives in the resource APIs.
 
  There is some expectation that this type of functionality will extend
  beyond Nova, but for that we could look at creating a shared library
 of
  code to ease implementing this sort of thing in each API that needs
 it.
 
 
 
  That's really a good question, so maybe I could give some feedback on
  how we deal with the existing use-cases.
  About the possible integration with Nova, that's already something we
  did for the virtual instances use-case, thanks to an API extension
  responsible for checking if a scheduler hint called 'reservation' was
  spent, and if so, take use of the python-climateclient package to send a
  request to Climate.
 
   I truly agree with the fact that possibly users should not use a
   separate API for reserving resources, but that would then be the duty of
   the project itself (Nova, Cinder or even Heat). That said, we think that
   there is a need for a global scheduler managing resources rather than
   siloing the resources per project. Hence that's why we still think there is a
   need for a Climate Manager.

 What we need to dig in to is *why* do you feel it needs to be global?

 I'm trying to understand what you're saying here ... do you mean that
 since we're 

Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-14 Thread Renat Akhmerov
Take a look at the method get_pecan_config() in mistral/api/app.py. That's where you 
can pass arbitrary parameters into the pecan app (see the initialization of the 
'cfg_dict' dictionary). They can then be accessed via pecan.conf as described here: 
http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
 If I understood the problem correctly, this should be helpful.
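
Roughly, the idea looks like this (a simplified sketch, not the actual
mistral/api/app.py; 'transport' is just an example of an extra value the API
layer might need):

import pecan
from pecan import configuration

def get_pecan_config(transport):
    cfg_dict = {
        'app': {
            'root': 'mistral.api.controllers.root.RootController',
        },
        # Anything else stashed here becomes readable via pecan.conf later.
        'transport': transport,
    }
    return configuration.conf_from_dict(cfg_dict)

def setup_app(config):
    configuration.set_config(dict(config), overwrite=True)
    return pecan.make_app(config.app.root)

# ...and then, anywhere inside the running app:
#     transport = pecan.conf.transport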

Renat Akhmerov
@ Mirantis Inc.



On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:

 We have access to all configuration parameters in the context of api.py. Maybe 
 you don't pass it but just instantiate it where you need it? Or I may 
 misunderstand what you're trying to do...
 
 DZ 
 
 PS: can you generate and update mistral.config.example to include new oslo 
 messaging options? I forgot to mention it on review on time. 
 
 
 On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
 On the transport variable, the problem I see isn't with passing the variable 
 to the engine and executor.  It's passing the transport into the API layer.  
 The API layer is a pecan app and I currently don't see a way where the 
 transport variable can be passed to it directly.  I'm looking at 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and 
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.  
 Do you have any suggestion?  Thanks. 
 
 
 On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
 I can write a method in base test to start local executor.  I will do that 
 as a separate bp.  
 Ok.
 
 After the engine is made standalone, the API will communicate to the engine 
 and the engine to the executor via the oslo.messaging transport.  This 
 means that for the local option, we need to start all three components 
 (API, engine, and executor) on the same process.  If the long term goal as 
 you stated above is to use separate launchers for these components, this 
 means that the API launcher needs to duplicate all the logic to launch the 
 engine and the executor. Hence, my proposal here is to move the logic to 
 launch the components into a common module and either have a single generic 
 launch script that launch specific components based on the CLI options or 
 have separate launch scripts that reference the appropriate launch function 
 from the common module.
 
 Ok, I see your point. Then I would suggest we have one script which we could 
 use to run all the components (any subset of of them). So for those 
 components we specified when launching the script we use this local 
 transport. Btw, scheduler eventually should become a standalone component 
 too, so we have 4 components.
 
 The RPC client/server in oslo.messaging do not determine the transport.  
 The transport is determine via oslo.config and then given explicitly to the 
 RPC client/server.  
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
  and 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
  are examples for the client and server respectively.  The in process Queue 
 is instantiated within this transport object from the fake driver.  For the 
 local option, all three components need to share the same transport in 
 order to have the Queue in scope. Thus, we will need some method to have 
 this transport object visible to all three components and hence my proposal 
 to use a global variable and a factory method. 
 I’m still not sure I follow your point here.. Looking at the links you 
 provided I see this:
 
 transport = messaging.get_transport(cfg.CONF)
 
 So my point here is we can make this call once in the launching script and 
 pass it to engine/executor (and now API too if we want it to be launched by 
 the same script). Of course, we’ll have to change the way how we initialize 
 these components, but I believe we can do it. So it’s just a dependency 
 injection. And in this case we wouldn’t need to use a global variable. Am I 
 still missing something?
 
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Actions design BP

2014-03-14 Thread Renat Akhmerov
On 14 Mar 2014, at 05:00, Dmitri Zimine d...@stackstorm.com wrote:

 - Async actions: how do the results of an async action communicate back? 
 My understanding is that the remote system is assumed to call back to 
 Mistral with the action execution id, it's on the engine to handle the callback, 
 and the action needs to let the engine know to expect a callback. Let's put the 
 explanation here. 

Yes. When I was writing this I thought it was a little bit out of scope. 
Basically, the API has a method which can be used to deliver a result once 
it has been calculated. The engine then does what's needed to update the task state 
and advance the workflow. Currently, it's only possible to do it via the REST API, 
and we also have a requirement to do it via MQ (it can be configurable).

   - is_sync() - consider using an attribute instead -  @mistral.async

Well, I had an idea that it may depend on how a particular action instance is 
parameterized (in other words, a dynamic thing rather than a static property). It 
would just give more flexibility.

   - can we think of a way to unify sync and async actions from engine's 
 standpoint? So that we don't special-case it in the engine?

To be precise, the engine has no knowledge about it. Only the executor does, and it 
has to, but the difference is pretty small. If the action is sync, the executor should 
just call the API method I mentioned above to deliver the action result. When we finish 
our BPs related to oslo.messaging, this will work over it.

 @ Joshua - does something similar exist in TaskFlow already?

As far as I know, it doesn’t exist in TaskFlow but I may be wrong.. 

 - def dry_run() - maybe name it test(); let's stress that this method should 
 return a representative sample output. 

Ok, I like that. Will change this part.

 - Input - we need a facility to declare, validate and list input parameters, 
 e.g. VALID_KEYS = ['url', 'parameters'], def validate(): 
 
 - class HTTPAction(object):
 def __init__(self, url, params, method, headers, body):
 Not happy about declaring parameters explicitly. How about using *args, 
 **kwargs, or a 'parameters' dictionary? 

Yes, these two items are related. I think it's a matter of responsibility 
distribution: whether we want validation to happen in the action factory or in the 
action itself. I tend to agree with you on this one that the action class should be 
responsible for its own validation. Let's think that through some more.

 - DSL In-Place Declaration - I did minor edits in the section, please check. 

Ok. Thanks.


Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Selenium (which is non-free) is back again in Horizon (Icehouse Beta 3)

2014-03-14 Thread Julie Pichon
On 14/03/14 11:31, Thomas Goirand wrote:
 Hi,
 
 A few months ago, I raised the fact that Selenium *CANNOT* be a hard
 test-requirements.txt build-dependency of Horizon, because it is
 non-free (because of binaries like the browser plug-ins not being
 build-able from source). So it was removed.
 
 Now, on the new Icehouse beta 3, it's back again, and I get some unit
 tests errors (see below).
 
  Guys, could we stop having this kind of regression, and make Selenium
  tests not mandatory? They aren't runnable in Debian.

We're aware of the issue, a fix [1] was merged in RC1 if that helps.

Julie

[1] https://bugs.launchpad.net/horizon/+bug/1289270

 
 Cheers,
 
 Thomas Goirand (zigo)
 
 ==
 ERROR: Failure: ImportError (No module named selenium)
 --
   File openstack_dashboard/test/integration_tests/helpers.py, line 17,
 in module
 import selenium
 ImportError: No module named selenium
 
 ==
 ERROR: Failure: ImportError (No module named selenium.webdriver.common.keys)
  File openstack_dashboard/test/integration_tests/tests/test_login.py,
 line 15, in module
 import selenium.webdriver.common.keys as keys
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] os-cloud-config ssh access to cloud

2014-03-14 Thread Jiří Stránský

On 12.3.2014 17:03, Jiří Stránský wrote:


Thanks for all the replies everyone :)

I'm leaning towards going the way Robert suggested on the review [1] -
upload pre-created signing cert, signing key and CA cert to controller
nodes using Heat. This seems like a much cleaner approach to
initializing overcloud than having to SSH into it, and it will solve
both problems i outlined in the initial e-mail.

It creates another problem though - for simple (think PoC) deployments
without external CA we'll need to create the keys/certs
somehow/somewhere anyway :) It shouldn't be hard because it's already
implemented in keystone-manage pki_setup but we should figure out a way
to avoid copy-pasting the world. Maybe Tuskar calling pki_setup locally
and passing a parameter to pki_setup to override default location where
new keys/certs will be generated?


Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/



I'm adding [Heat] to the subject. After some discussion on IRC it seems 
that what we need to do with Heat is not totally straightforward.


Here's an attempt at a brief summary:

In TripleO we deploy OpenStack using Heat, the cloud is described in a 
Heat template [1]. We want to externally generate and then upload 3 
small binary files to the controller nodes (Keystone PKI key and 
certificates [2]). We don't want to generate them in place or scp them 
into the controller nodes, because that would require having ssh access 
to the deployed controller nodes, which comes with drawbacks [3].


It would be good if we could have the 3 binary files put into the 
controller nodes as part of the Heat stack creation. Can we include them 
in the template somehow? Or is there an alternative feasible approach?



Thank you

Jirka

[1] 
https://github.com/openstack/tripleo-heat-templates/blob/0490dd665899d3265a72965aeaf3a342275f4328/overcloud-source.yaml
[2] 
http://docs.openstack.org/developer/keystone/configuration.html#install-external-signing-certificate
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029327.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] os-cloud-config ssh access to cloud

2014-03-14 Thread Steven Dake

On 03/14/2014 06:33 AM, Jiří Stránský wrote:

On 12.3.2014 17:03, Jiří Stránský wrote:


Thanks for all the replies everyone :)

I'm leaning towards going the way Robert suggested on the review [1] -
upload pre-created signing cert, signing key and CA cert to controller
nodes using Heat. This seems like a much cleaner approach to
initializing overcloud than having to SSH into it, and it will solve
both problems i outlined in the initial e-mail.

It creates another problem though - for simple (think PoC) deployments
without external CA we'll need to create the keys/certs
somehow/somewhere anyway :) It shouldn't be hard because it's already
implemented in keystone-manage pki_setup but we should figure out a way
to avoid copy-pasting the world. Maybe Tuskar calling pki_setup locally
and passing a parameter to pki_setup to override default location where
new keys/certs will be generated?


Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/



I'm adding [Heat] to the subject. After some discussion on IRC it 
seems that what we need to do with Heat is not totally straightforward.


Here's an attempt at a brief summary:

In TripleO we deploy OpenStack using Heat, the cloud is described in a 
Heat template [1]. We want to externally generate and then upload 3 
small binary files to the controller nodes (Keystone PKI key and 
certificates [2]). We don't want to generate them in place or scp them 
into the controller nodes, because that would require having ssh 
access to the deployed controller nodes, which comes with drawbacks [3].


It would be good if we could have the 3 binary files put into the 
controller nodes as part of the Heat stack creation. Can we include 
them in the template somehow? Or is there an alternative feasible 
approach?



Jirka,

You can inject files via the heat-cfntools agents.  Check out:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-files

You could also use raw cloudinit data to inject a files section.

There may be a final option with software config, but I'm not certain if 
software config has grown a feature to inject files yet.
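
To make the cloud-init route a bit more concrete, here is a rough sketch of building 
a cloud-config user-data blob with a write_files section (the target paths, file 
names and ownership are illustrative assumptions, not what TripleO actually uses):

    # Sketch: build cloud-config user-data that drops the three PKI files
    # onto the node via cloud-init's write_files module. Paths, names and
    # ownership below are assumptions for illustration.
    import base64

    import yaml

    FILES = {
        '/etc/keystone/ssl/certs/signing_cert.pem': 'signing_cert.pem',
        '/etc/keystone/ssl/private/signing_key.pem': 'signing_key.pem',
        '/etc/keystone/ssl/certs/ca.pem': 'cacert.pem',
    }

    write_files = []
    for target, source in FILES.items():
        with open(source, 'rb') as f:
            content = base64.b64encode(f.read()).decode('ascii')
        write_files.append({
            'path': target,
            'content': content,
            'encoding': 'b64',
            'owner': 'keystone:keystone',
            'permissions': '0600',
        })

    user_data = '#cloud-config\n' + yaml.safe_dump({'write_files': write_files})
    print(user_data)

Whether the result fits under the user-data/metadata size limits is a separate 
question.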


Regards
-steve



Thank you

Jirka

[1] 
https://github.com/openstack/tripleo-heat-templates/blob/0490dd665899d3265a72965aeaf3a342275f4328/overcloud-source.yaml
[2] 
http://docs.openstack.org/developer/keystone/configuration.html#install-external-signing-certificate
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029327.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] os-cloud-config ssh access to cloud

2014-03-14 Thread Jiří Stránský

On 14.3.2014 14:42, Steven Dake wrote:

On 03/14/2014 06:33 AM, Jiří Stránský wrote:

On 12.3.2014 17:03, Jiří Stránský wrote:


Thanks for all the replies everyone :)

I'm leaning towards going the way Robert suggested on the review [1] -
upload pre-created signing cert, signing key and CA cert to controller
nodes using Heat. This seems like a much cleaner approach to
initializing overcloud than having to SSH into it, and it will solve
both problems i outlined in the initial e-mail.

It creates another problem though - for simple (think PoC) deployments
without external CA we'll need to create the keys/certs
somehow/somewhere anyway :) It shouldn't be hard because it's already
implemented in keystone-manage pki_setup but we should figure out a way
to avoid copy-pasting the world. Maybe Tuskar calling pki_setup locally
and passing a parameter to pki_setup to override default location where
new keys/certs will be generated?


Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/



I'm adding [Heat] to the subject. After some discussion on IRC it
seems that what we need to do with Heat is not totally straightforward.

Here's an attempt at a brief summary:

In TripleO we deploy OpenStack using Heat, the cloud is described in a
Heat template [1]. We want to externally generate and then upload 3
small binary files to the controller nodes (Keystone PKI key and
certificates [2]). We don't want to generate them in place or scp them
into the controller nodes, because that would require having ssh
access to the deployed controller nodes, which comes with drawbacks [3].

It would be good if we could have the 3 binary files put into the
controller nodes as part of the Heat stack creation. Can we include
them in the template somehow? Or is there an alternative feasible
approach?


Jirka,

You can inject files via the heat-cfntools agents.  Check out:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-files

You could also use raw cloudinit data to inject a files section.

There may be a final option with software config, but I'm not certain if
software config has grown a feature to inject files yet.

Regards
-steve



Are these approaches subject to size limits? In the IRC discussion a 
limit of 16 KB came up (I assumed total, not per-file), which could be a 
problem in theory. The files `keystone-manage pki_setup` generated for 
me were about 7.2 KB which gives about 10 KB when encoded as base64. So 
we wouldn't be over the limit but it's not exactly comfortable either 
(if that 16 KB limit still applies).
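
A quick back-of-the-envelope check is easy to script; a sketch (the file names are 
just what I believe pki_setup generates by default, so treat them as assumptions):

    # Sketch: estimate whether the base64-encoded PKI files fit under a
    # ~16 KB metadata limit. The file names are assumed defaults.
    import os

    FILES = ['signing_cert.pem', 'signing_key.pem', 'cacert.pem']
    LIMIT = 16 * 1024

    raw = sum(os.path.getsize(f) for f in FILES)
    encoded = ((raw + 2) // 3) * 4  # base64 grows data by roughly 4/3
    print('raw: %d bytes, base64: ~%d bytes, limit: %d' % (raw, encoded, LIMIT))
    print('fits' if encoded < LIMIT else 'too big')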


Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [3rd party testing] How to setup CI? Take #2

2014-03-14 Thread Luke Gorrie
Howdy!

Here's some follow-up on setting up devstack-vm-gate as a 3rd party.

On 13 March 2014 15:30, Luke Gorrie l...@tail-f.com wrote:

 1. I need to enable an ML2 mech driver. How can I do this? I have been
 trying to create a localrc with a Q_ML2_PLUGIN_MECHANISM_DRIVERS=...
 line, but it appears that the KEEP_LOCALRC option in devstack-gate is
 broken (confirmed on #openstack-infra).



2. How do I streamline which tests are run? I tried adding export
 DEVSTACK_GATE_TEMPEST_REGEX=network in the Jenkins job configuration but I
 don't see any effect. (word on #openstack-infra is this option is not used
 by them so status unknown.)


Now we have diagnosed (on #openstack-qa) and submitted fixes to
devstack-gate for both of these problems.

Links: https://review.openstack.org/#/c/80359/ (for localrc) and
https://review.openstack.org/#/c/80566/ (for regex).

3. How do I have Jenkins copy the log files into a directory on the Jenkins
 master node (that I can serve up with Apache)? This is left as an exercise
 to the reader in the blog tutorial but I would love a cheat, since I am
 getting plenty of exercise already :-).


This is still open for me. I have some tips from IRC (thanks Jay) but I
haven't been able to make them work yet.


 I also have the meta-question: How can I test changes/fixes to
 devstack-gate?


We found a solution for this now. If you add this line to the Jenkins job:

  export SKIP_DEVSTACK_GATE_PROJECT=1

then /opt/stack/new/devstack-gate/devstack-vm-gate.sh is no longer overwritten on
each test run, which makes it possible to edit the script and work on it.
(Important: remember to also remove the lines from the Jenkins job that do a git
reset --hard to HEAD.)

I also have an issue that worries me. I once started seeing tempest tests
 failing due to a resource leak where the kernel ran out of loopback mounts
 and that broke tempest.


This issue hasn't popped up again.

Overall it's fun to be able to hang out on IRC and make improvements to the
OpenStack infrastructure tools. On the other hand, I've now invested about
a week of effort and I still don't have the basic devstack-vm-gate working
reliably, let alone testing the driver that I am interested in. So I find
it's a bit tough as a small vendor to comply with the new CI rules. Lack of
familiarity with the overall toolchain + 30-minute turnaround time on
testing each change really kills my productivity.

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] os-cloud-config ssh access to cloud

2014-03-14 Thread Steven Dake

On 03/14/2014 07:14 AM, Jiří Stránský wrote:

On 14.3.2014 14:42, Steven Dake wrote:

On 03/14/2014 06:33 AM, Jiří Stránský wrote:

On 12.3.2014 17:03, Jiří Stránský wrote:


Thanks for all the replies everyone :)

I'm leaning towards going the way Robert suggested on the review [1] -
upload pre-created signing cert, signing key and CA cert to controller
nodes using Heat. This seems like a much cleaner approach to
initializing overcloud than having to SSH into it, and it will solve
both problems i outlined in the initial e-mail.

It creates another problem though - for simple (think PoC) deployments
without external CA we'll need to create the keys/certs
somehow/somewhere anyway :) It shouldn't be hard because it's already
implemented in keystone-manage pki_setup but we should figure out a 
way
to avoid copy-pasting the world. Maybe Tuskar calling pki_setup 
locally
and passing a parameter to pki_setup to override default location 
where

new keys/certs will be generated?


Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/



I'm adding [Heat] to the subject. After some discussion on IRC it
seems that what we need to do with Heat is not totally straightforward.

Here's an attempt at a brief summary:

In TripleO we deploy OpenStack using Heat, the cloud is described in a
Heat template [1]. We want to externally generate and then upload 3
small binary files to the controller nodes (Keystone PKI key and
certificates [2]). We don't want to generate them in place or scp them
into the controller nodes, because that would require having ssh
access to the deployed controller nodes, which comes with drawbacks 
[3].


It would be good if we could have the 3 binary files put into the
controller nodes as part of the Heat stack creation. Can we include
them in the template somehow? Or is there an alternative feasible
approach?


Jirka,

You can inject files via the heat-cfntools agents.  Check out:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-files 



You could also use raw cloudinit data to inject a files section.

There may be a final option with software config, but I'm not certain if
software config has grown a feature to inject files yet.

Regards
-steve



Are these approaches subject to size limits? In the IRC discussion a 
limit of 16 KB came up (i assumed total, not per-file), which could be 
a problem in theory. The files `keystone-manage pki_setup` generated 
for me were about 7.2 KB which gives about 10 KB when encoded as 
base64. So we wouldn't be over the limit but it's not exactly 
comfortable either (if that 16 KB limit still applies).


Thanks

Jirka


Jirka,

Yes, these are limited by the metadata size limit, which on last 
inspection of the nova db code was approximately 16 KB.


If software config supports file injection, it may not be subject to 
size limits.  Steve Baker would have a better handle on that.


Another option is to load the data into swift and access it from inside the 
VM on boot.  I know that is kind of hacky.  I think the default limit 
for nova is too small, but the data consumption adds up per VM in the 
database.


Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][Designate] A question about DNSaaS?

2014-03-14 Thread Mac Innes, Kiall
On Fri, 2014-03-14 at 08:44 +, Yuzhou (C) wrote:
 Hi stackers,
 
 Are there any plans about DNSaaS on the neutron roadmap?
 
 As far as I known, Designate provides DNSaaS services for
 OpenStack.
 
 Why DNSaaS is Independent service , not a network service like LBaas
 or VPNaaS?
 
 Thanks,
 
 Zhou Yu

I personally see DNSaaS as being outside the scope of Neutron. VPNaaS
and LBaaS are, at their core, about moving bits from A -> B. DNSaaS is
different, as it plays no part in how bits are moved from A -> B.

Neutron's 1 line intro is:

Neutron is an OpenStack project to provide networking as a service
between interface devices (e.g., vNICs) managed by other Openstack
services (e.g., nova).

I'm not sure I could squeeze authoritative DNS into that scope without
changing it :)

Thanks,
Kiall


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Selenium (which is non-free) is back again in Horizon (Icehouse Beta 3)

2014-03-14 Thread Thomas Goirand
On 03/14/2014 09:11 PM, Julie Pichon wrote:
 On 14/03/14 11:31, Thomas Goirand wrote:
 Hi,

 A few months ago, I raised the fact that Selenium *CANNOT* be a hard
 test-requirements.txt build-dependency of Horizon, because it is
 non-free (because of binaries like the browser plug-ins not being
 build-able from source). So it was removed.

 Now, on the new Icehouse beta 3, it's back again, and I get some unit
 tests errors (see below).

 Guys, could we stop having this kind of regressions, and make Selenium
 tests not mandatory? They aren't runnable in Debian.
 
 We're aware of the issue, a fix [1] was merged in RC1 if that helps.
 
 Julie
 
 [1] https://bugs.launchpad.net/horizon/+bug/1289270

Hi Julie,

This does help a lot indeed. Thanks!

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-14 Thread Brant Knudson
On Wed, Mar 12, 2014 at 11:03 AM, Duncan Thomas duncan.tho...@gmail.comwrote:

 On 15 January 2014 18:53, Brant Knudson b...@acm.org wrote:

  At no point do I care what are the different commits that are being
 brought
  in from oslo-incubator. If the commits are listed in the commit message
 then
  I feel an obligation to verify that they got the right commits in the
  message and that takes extra time for no gain.

 I find that I very much *do* want a list of what changes have been
 pulled in, so I've some idea of the intent of the changes. Some of the
 OSLO changes can be large and complicated, and the more clues as to
 why things changed, the better the chance I've got of spotting
 breakages or differing assumptions between cinder and OSLO (of which
 there have been a number)

 I don't very often verify that the version that has been pulled in is
 the very latest or anything like that - generally I want to know:


One thing that I think we should be verifying is that the changes being
brought over have actually been committed to oslo-incubator. I'm sure there
have been times where someone eager to get the fix in has not waited for
the oslo-incubator merge before syncing their change over.

 - What issue are you trying to fix by doing an update? (The fact OSLO
 is ahead of us is rarely a good enough reason on its own to do an
 update - preferably reference a specific bug that exists in cinder)


When I sync a change from oslo-incubator to fix a bug I put Closes-Bug on
the commit message to indicate what bug is being fixed. If the sync tool
was enhanced to pick out the *-Bug references from the oslo commits to
include in the sync commit message that would be handy.
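
Something along these lines could probably be bolted onto the sync tool; a sketch, 
assuming the tool already knows the old and new oslo-incubator commits and the 
module paths being synced (the SHAs and paths below are placeholders):

    # Sketch: collect Closes-Bug/Partial-Bug/Related-Bug references from the
    # oslo-incubator commits being synced so they can be repeated in the
    # sync commit message. OLD_SHA, NEW_SHA and the paths are placeholders.
    import re
    import subprocess

    log = subprocess.check_output(
        ['git', 'log', '--format=%B', 'OLD_SHA..NEW_SHA', '--',
         'openstack/common/db/'],
        cwd='/path/to/oslo-incubator').decode('utf-8')

    bug_refs = sorted(set(re.findall(
        r'(?:Closes|Partial|Related)-Bug:\s*#?\d+', log)))
    for ref in bug_refs:
        print(ref)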


  - What other incidental changes are being pulled in (by intent, not
 just the code)
  - If I'm unsure about one of the incidental changes, how do I go find
 the history for it, with lots of searching (hence the commit ID or the
 change ID) - this lets me find bugs, reviews etc


How does one get the list of commits that are being brought over from
oslo-incubator? You'd have to know what the previous commit was that was
synced.

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Meeting minutes

2014-03-14 Thread Dina Belova
Hello stackers!

Thanks everyone who took part in our meeting :)

Meeting minutes are:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-14-15.09.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-14-15.09.txt
Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-14-15.09.log.html


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-14 Thread Changbin Liu
As a technical person, I would love to hear the major/significant/big
differences between Mistral and TaskFlow.

Last October I read this blog
http://www.mirantis.com/blog/announcing-mistral-task-flow-as-a-service/ ,
and also saw ML/IRC communications, but still could not quite figure out
the grand/new vision of Mistral. Not to mention that vision keeps evolving
rapidly as mentioned by Renat.

Please enlighten me.


Thanks

Changbin


On Fri, Mar 14, 2014 at 2:33 AM, Renat Akhmerov rakhme...@mirantis.comwrote:

 Folks,

 Mistral and TaskFlow are significantly different technologies. With
 different set of capabilities, with different target audience.

 We may not be doing enough to clarify all the differences, I admit that.
 The challenge here is that people tend to judge having minimal amount of
 information about both things. As always, devil in the details. Stan is
 100% right, seems is not an appropriate word here. Java seems to be
 similar to C++ at the first glance for those who have little or no
 knowledge about them.

 To be more consistent I won't be providing all the general considerations
 that I've been using so far (in etherpads, MLs, in personal discussions),
 it doesn't seem to be working well, at least not with everyone. So to make
 it better, like I said in that different thread: we're evaluating TaskFlow
 now and will share the results. Basically, it's what Boris said about what
 could and could not be implemented in TaskFlow. But since the very
 beginning of the project I never abandoned the idea of using TaskFlow some
 day when it's possible.

 So, again: Joshua, we hear you, we're working in that direction.


 I'm reminded of

  http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
  where it seemed like we were doing much better collaboration, what has
  happened to break this continuity?


 Not sure why you think something is broken. We just want to finish the
 pilot with all the 'must' things working in it. This is a plan. Then we can
 revisit and change absolutely everything. Remember, to the great extent
 this is research. Joshua, this is what we talked about and agreed on many
 times. I know you might be anxious about that given the fact it's taking
 more time than planned but our vision of the project has drastically
 evolved and gone far far beyond the initial Convection proposal. So the
 initial idea of POC is no longer relevant. Even though we finished the
 first version in December, we realized it wasn't something that should have
 been shared with the community since it lacked some essential things.


 Renat Akhmerov
 @ Mirantis Inc.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-14 Thread Jay Pipes
On Fri, 2014-03-14 at 08:37 +0100, Radomir Dopieralski wrote:
 Hello,
 
 I also think that this thread is going in the wrong direction, but I
 don't think the direction Boris wants is the correct one either. Frankly
 I'm a little surprised that nobody mentioned another advantage that soft
 delete gives us, the one that I think it was actually used for originally.
 
 You see, soft delete is an optimization. It's there to make the system
 work faster as a whole, have less code and be simpler to maintain and debug.
 
 How does it do it, when, as clearly shown in the first post in this
 thread, it makes the queries slower, requires additional indices in the
 database and more logic in the queries?

I feel it isn't an optimization if:

* It slows down the code base
* Makes the code harder to read and understand
* Deliberately obscures the actions of removing and restoring resources
* Encourages the idea that everything in the system is undoable, like
the cloud is a Word doc.

  The answer is, by doing more
 with those queries, by making you write less code, execute fewer queries
 to the databases and avoid duplicating the same data in multiple places.

Fewer queries does not always make faster code, nor does it lead to
inherently race-free code.

 OpenStack is a big, distributed system of multiple databases that
 sometimes rely on each other and cross-reference their records. It's not
 uncommon to have some long-running operation started, that uses some
 data, and then, in the middle of its execution, have that data deleted.
 With soft delete, that's not a problem -- the operation can continue
 safely and proceed as scheduled, with the data it was started with in
 the first place -- it still has access to the deleted records as if
 nothing happened.

I believe a better solution would be to use Boris' solution and
implement safeguards around the delete operation. For instance, not
being able to delete an instance that has tasks still running against
it. Either that, or implement true task abortion logic that can notify
distributed components about the need to stop a running task because
either the user wants to delete a resource or simply cancel the
operation they began.

  You simply won't be able to schedule another operation
 like that with the same data, because it has been soft-deleted and won't
 pass the validation at the beginning (or even won't appear in the UI or
 CLI). This solves a lot of race conditions, error handling, additional
 checks to make sure the record still exists, etc.

Sorry, I disagree here. Components that rely on the soft-delete behavior
to get the resource data from the database should instead respond to a
NotFound that gets raised by aborting their running task.
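
As a toy illustration of that pattern (nothing below is real Nova/Cinder code; 
NotFound, get_instance() and the task body are stand-ins):

    # Toy sketch of "respond to NotFound instead of relying on soft delete".
    # NotFound, get_instance() and do_long_running_work() are stand-ins,
    # not real OpenStack APIs.
    class NotFound(Exception):
        pass


    def get_instance(instance_id):
        # Pretend the row was hard-deleted while the task was queued.
        raise NotFound(instance_id)


    def do_long_running_work(instance):
        return 'done'


    def run_task(instance_id):
        try:
            instance = get_instance(instance_id)
        except NotFound:
            # The resource is gone: abort and clean up explicitly instead
            # of silently continuing against a soft-deleted row.
            return 'aborted'
        return do_long_running_work(instance)


    print(run_task('some-uuid'))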

 Without soft delete, you need to write custom code every time to handle
 the case of a record being deleted mid-operation, including all the
 possible combinations of which record and when.

Not custom code. Explicit code paths for explicit actions.

  Or you need to copy all
 the relevant data in advance over to whatever is executing that
 operation. 

This is already happening.

 This cannot be abstracted away entirely (although tools like
 TaskFlow help), as this is specific to the case you are handling. And
 it's not easy to find all the places where you can have a race condition
 like that -- especially when you are modifying existing code that has
 been relying on soft delete before. You can have bugs undetected for
 years, that only appear in production, on very large deployments, and
 are impossible to reproduce reliably.
 
 There are more similar cases like that, including cascading deletes and
 more advanced stuff, but I think this single case already shows that
 the advantages of soft delete out-weight its disadvantages.

I respectfully disagree :) I think the benefits of explicit code paths
and increased performance of the database outweigh the costs of changing
existing code.

Best,
-jay

 On 13/03/14 19:52, Boris Pavlovic wrote:
  Hi all, 
  
  
  I would like to fix direction of this thread. Cause it is going in wrong
  direction. 
  
  To assume:
  1) Yes restoring already deleted resources could be useful. 
  2) Current approach with soft deletion is broken by design and we should
  get rid of them. 
  
  More about why I think that it is broken: 
  1) When you are restoring some resource you should restore N records
  from N tables (e.g. VM)
  2) Restoring sometimes means not only restoring DB records. 
  3) Not all resources should be restorable (e.g. why I need to restore
  fixed_ip? or key-pairs?)
  
  
  So what we should think about is:
  1) How to implement restoring functionally in common way (e.g. framework
  that will be in oslo) 
  2) Split of work of getting rid of soft deletion in steps (that I
  already mention):
  a) remove soft deletion from places where we are not using it
  b) replace internal code where we are using soft deletion to that framework 
  c) replace API stuff using 

Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-14 Thread Jay S Bryant
From:   Brant Knudson b...@acm.org
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   03/14/2014 10:26 AM
Subject:Re: [openstack-dev] [Oslo] Improving oslo-incubator 
update.py





On Wed, Mar 12, 2014 at 11:03 AM, Duncan Thomas duncan.tho...@gmail.com 
wrote:
On 15 January 2014 18:53, Brant Knudson b...@acm.org wrote:

 At no point do I care what are the different commits that are being 
brought
 in from oslo-incubator. If the commits are listed in the commit message 
then
 I feel an obligation to verify that they got the right commits in the
 message and that takes extra time for no gain.

-- Duncan, It is important to know what commits are being brought over to 
help provide a pointer to
-- the possible cause of subsequent bugs that arise.  I.E. if we sync up 
the DB, there is a commit for fixing
-- db connection order and suddenly we are getting intermittent DB 
connection failures, it give us
-- a starting point to fixing the issue.

I find that I very much *do* want a list of what changes have been
pulled in, so I've some idea of the intent of the changes. Some of the
OSLO changes can be large and complicated, and the more clues as to
why things changed, the better the chance I've got of spotting
breakages or differing assumptions between cinder and OSLO (of which
there have been a number)

I don't very often verify that the version that has been pulled in is
the very latest or anything like that - generally I want to know:

One thing that I think we should be verifying is that the changes being 
brought over have actually been committed to oslo-incubator. I'm sure 
there have been times where someone eager to get the fix in has not waited 
for the oslo-incubator merge before syncing their change over.

 - What issue are you trying to fix by doing an update? (The fact OSLO
is ahead of us is rarely a good enough reason on its own to do an
update - preferably reference a specific bug that exists in cinder)

When I sync a change from oslo-incubator to fix a bug I put Closes-Bug on 
the commit message to indicate what bug is being fixed. If the sync tool 
was enhanced to pick out the *-Bug references from the oslo commits to 
include in the sync commit message that would be handy.
 
 - What other incidental changes are being pulled in (by intent, not
just the code)
 - If I'm unsure about one of the incidental changes, how do I go find
the history for it, with lots of searching (hence the commit ID or the
change ID) - this lets me find bugs, reviews etc

How does one get the list of commits that are being brought over from 
oslo-incubator? You'd have to know what the previous commit was that was 
synced.

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-14 Thread Miguel Angel Ajo

As we said in the Thursday meeting, I've filed a bug with the details:

https://bugs.launchpad.net/neutron/+bug/1292598

Feel free to add / ask for any missing details.

Best,
Miguel Ángel.

On 03/13/2014 10:52 PM, Carl Baldwin wrote:

Right, the L3 agent does do this already.  Agreed that the limiting
factor is the cumulative effect of the wrappers and executables' start
up overhead.

Carl

On Thu, Mar 13, 2014 at 9:47 AM, Brian Haley brian.ha...@hp.com wrote:

Aaron,

I thought the l3-agent already did this if doing a full sync?

_sync_routers_task() -> _process_routers() -> spawn_n(self.process_router, ri)

So each router gets processed in a greenthread.

It seems like the other calls - sudo/rootwrap, /sbin/ip, etc are now the
limiting factor, at least on network nodes with large numbers of namespaces.

-Brian

On 03/13/2014 10:48 AM, Aaron Rosen wrote:

The easiest/quickest thing to do for icehouse would probably be to run the
initial sync in parallel like the dhcp-agent does for this exact reason. See:
https://review.openstack.org/#/c/28914/ which did this for the dhcp-agent.
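
For reference, the fan-out pattern being referred to looks roughly like this (a 
sketch; the router data and process_router() are made up for illustration):

    # Sketch of the greenthread fan-out used for the initial sync; the
    # router list and process_router() are made up for illustration.
    import eventlet
    eventlet.monkey_patch()


    def process_router(router):
        # Per-router work (namespaces, iptables, ...) would go here.
        print('processing router %s' % router['id'])


    routers = [{'id': 'r-%d' % i} for i in range(10)]

    pool = eventlet.GreenPool(size=4)
    for router in routers:
        pool.spawn_n(process_router, router)
    pool.waitall()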

Best,

Aaron

On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo majop...@redhat.com wrote:

 Yuri, could you elaborate your idea in detail? , I'm lost at some
 points with your unix domain / token authentication.

 Where does the token come from?,

 Who starts rootwrap the first time?

 If you could write a full interaction sequence, on the etherpad, from
 rootwrap daemon start ,to a simple call to system happening, I think that'd
 help my understanding.


Here it is: https://etherpad.openstack.org/p/rootwrap-agent
Please take a look.

--

Kind regards, Yuriy.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-14 Thread Duncan Thomas
On 14 March 2014 16:10, Jay S Bryant jsbry...@us.ibm.com wrote:
 -- Duncan, It is important to know what commits are being brought over to
 help provide a pointer to
 -- the possible cause of subsequent bugs that arise.  I.E. if we sync up
 the DB, there is a commit for fixing
 -- db connection order and suddenly we are getting intermittent DB
 connection failures, it give us
 -- a starting point to fixing the issue.


Jay, there's been a mix-up in who's saying what here. I *very much*
want to know what commits are being brought over. For slightly
different reasons (I'm mostly wanting them for easy review, you for
bug fixing). Brant is suggesting that just the last commit ID is
enough, which I disagree with (and will continue to hit -1/-2 for).

If somebody was to improve the import script to do this automatically
that would be great. Currently I can't see an easy way of
programmatically telling when the last import was - I'll take another
look at the problem if somebody smarter than me doesn't sort it first.
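
One possible (untested) approach: have the sync commit message record the
oslo-incubator commit it was taken from, then diff against that. A sketch, where
the 'oslo-incubator commit:' line and the paths are assumed conventions rather
than anything that exists today:

    # Sketch: find the oslo-incubator commit recorded by the last sync and
    # list what has landed since. The "oslo-incubator commit: <sha>" line
    # and the paths used here are assumptions, not an existing convention.
    import re
    import subprocess


    def last_synced_sha(project_dir, common_dir):
        log = subprocess.check_output(
            ['git', 'log', '--format=%B', '--', common_dir],
            cwd=project_dir).decode('utf-8')
        match = re.search(r'oslo-incubator commit:\s*([0-9a-f]{7,40})', log)
        return match.group(1) if match else None


    def commits_since(oslo_dir, since_sha, modules):
        paths = ['openstack/common/%s' % m for m in modules]
        out = subprocess.check_output(
            ['git', 'log', '--oneline', '%s..HEAD' % since_sha, '--'] + paths,
            cwd=oslo_dir).decode('utf-8')
        return out.splitlines()

With something like that in place, the sync commit message could list both the
commits and any *-Bug references automatically.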

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-14 Thread Duncan Thomas
On 13 March 2014 21:13, Roman Podoliaka rpodoly...@mirantis.com wrote:
 Hi Steven,

 Code from openstack/common/ dir is 'synced' from oslo-incubator. The
 'sync' is effectively a copy of oslo-incubator subtree into a project
 source tree. As syncs are not done at the same time, the code of
 synced modules may indeed by different for each project depending on
 which commit of oslo-incubator was synced.


Worth noting that there have been a few cases of projects patching
OSLO bugs in their own tree rather than fixing in OSLO then resyncing.
If anybody has any tooling that can detect that, I'd love to see the
results.

I'm generally of the opinion that cinder is likely to be resistant to
more parts of OSLO being used in cinder unless they are a proper
library - syncs have caused us significant pain, code churn, review
load and bugs in the last 12 months. I am but one voice among many,
but I know I'm not the only member of core who feels this to be the
case. Hopefully I can spend some time with OSLO core at the summit and
discuss the problems I've found.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-14 Thread Duncan Thomas
On 14 March 2014 03:07, sxmatch sxmatch1...@gmail.com wrote:

 So, if we delete volume really, just keep snapshot alive, is it possible?
 User don't want to use this volume at now, he can take a snapshot and then
 delete volume.

 If he want it again, can create volume from this snapshot.

 Any ideas?

This has been discussed in various cinder meetings and summits
multiple times. The end answer is 'no, we don't support that. If you
want to keep the snapshot, you need to keep the volume too'.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-14 Thread Duncan Thomas
On 7 March 2014 08:17, Yuzhou (C) vitas.yuz...@huawei.com wrote:
 First, generally, in public or private cloud, the end users of VMs 
 have no right to create new VMs directly.
 If someone want to create new VMs, he or she need to wait for approval 
 process.
 Then, the administrator Of cloud create a new VM to applicant. So the 
 workflow that you suggested is not convenient.

This approval process and admin action is the exact opposite of what
cloud is all about. I'd suggest that anybody using such a process has
little understanding of cloud and should be educated, not have weird
interfaces added to nova to support a broken premise. The cloud /is
different/ from traditional IT, that is its strength, and we should be
wary of undermining that to allow old-style thinking to continue.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-14 Thread Roman Podoliaka
Hi all,

 Worth noting that there have been a few cases of projects patching OSLO 
 bugs intheir own tree rather than fixing in OSLO then resyncing. If anybody 
 has any tooling that can detect that, I'd love to see the results.

They shouldn't have done that :(

I totally agree that the 'syncing from incubator' strategy of reusing
common code isn't pretty, but this is what we have now. And the oslo team
has been working hard to graduate libraries from the incubator and then
reuse them in target projects like any other 3rd party libraries.
Hopefully, we'll no longer need to sync code from the incubator soon.

Thanks,
Roman


On Fri, Mar 14, 2014 at 9:48 AM, Duncan Thomas duncan.tho...@gmail.com wrote:
 On 13 March 2014 21:13, Roman Podoliaka rpodoly...@mirantis.com wrote:
 Hi Steven,

 Code from openstack/common/ dir is 'synced' from oslo-incubator. The
 'sync' is effectively a copy of oslo-incubator subtree into a project
 source tree. As syncs are not done at the same time, the code of
 synced modules may indeed by different for each project depending on
 which commit of oslo-incubator was synced.


 Worth noting that there have been a few cases of projects patching
 OSLO bugs intheir own tree rather than fixing in OSLO then resyncing.
 If anybody has any tooling that can detect that, I'd love to see the
 results.

 I'm generally of the opinion that cinder is likely to be resistant to
 more parts of OSLO being used in cinder unless they are a proper
 library - syncs have caused us significant pain, code churn, review
 load and bugs in the last 12 months. I am but one voice among many,
 but I know I'm not the only member of core who feels this to be the
 case. Hopefully I can spend some time with OSLO core at the summit and
 discuss the problems I've found.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-14 Thread Joshua Harlow
Sure, I can try to help,

I started https://etherpad.openstack.org/p/taskflow-mistral so that we can all 
work on this.

I'd rather not make the architecture for mistral myself (that doesn't seem like an 
appropriate thing to do, for me to tell mistral what to do with its architecture), 
but I'm all for working on it together as a community (instead of me producing 
something that likely won't have much value).

Let us work on the above etherpad together and hopefully get some good ideas 
flowing :-)

From: Stan Lagun sla...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, March 14, 2014 at 12:11 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

Joshua,

why wait? Why not just help Renat with his research on that integration and 
bring your own vision to the table? Write a one-page architecture description 
on how Mistral can be built on top of TaskFlow and we can discuss the pros and 
cons. That would be much more productive.


On Fri, Mar 14, 2014 at 11:35 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:
Thanks Renat,

I'll keep waiting, and hoping that we can figure this out for everyone's 
benefit. Because in the end we are all much stronger working together and much 
weaker when not.

Sent from my really tiny device...

On Mar 13, 2014, at 11:41 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

Folks,

Mistral and TaskFlow are significantly different technologies. With different 
set of capabilities, with different target audience.

We may not be doing enough to clarify all the differences, I admit that. The 
challenge here is that people tend to judge having minimal amount of 
information about both things. As always, devil in the details. Stan is 100% 
right, “seems” is not an appropriate word here. Java seems to be similar to C++ 
at the first glance for those who have little or no knowledge about them.

To be more consistent I won’t be providing all the general considerations that 
I’ve been using so far (in etherpads, MLs, in personal discussions), it doesn’t 
seem to be working well, at least not with everyone. So to make it better, like 
I said in that different thread: we’re evaluating TaskFlow now and will share 
the results. Basically, it’s what Boris said about what could and could not be 
implemented in TaskFlow. But since the very beginning of the project I never 
abandoned the idea of using TaskFlow some day when it’s possible.

So, again: Joshua, we hear you, we’re working in that direction.


I'm reminded of
http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
where it seemed like we were doing much better collaboration, what has
happened to break this continuity?

Not sure why you think something is broken. We just want to finish the pilot 
with all the ‘must’ things working in it. This is a plan. Then we can revisit 
and change absolutely everything. Remember, to the great extent this is 
research. Joshua, this is what we talked about and agreed on many times. I know 
you might be anxious about that given the fact it’s taking more time than 
planned but our vision of the project has drastically evolved and gone far far 
beyond the initial Convection proposal. So the initial idea of POC is no longer 
relevant. Even though we finished the first version in December, we realized it 
wasn’t something that should have been shared with the community since it 
lacked some essential things.


Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] libvirt+Xen+OVS VLAN networking in icehouse

2014-03-14 Thread iain macdonnell
Hi Simon,

Thank you! One of those bugs helped me focus on an area of the (nova)
code that I hadn't found my way to yet, and I was able to at least get
back the functionality I had with Havana (hybrid OVS+Ethernet-Bridge
model) by setting firewall_driver in nova.conf to
nova.virt.libvirt.firewall.IptablesFirewallDriver instead of
nova.virt.firewall.NoopFirewallDriver - now I can launch an instance
with functional networking again.

I'd still like to understand how the OVS port is supposed to get setup
when the non-hybrid model is used, and eliminate the ethernet bridge
if possible. I'll dig into that a bit more...
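
In case it's useful to anyone poking at the same area, here is a rough sketch of 
manually stamping a port with the external_ids the OVS agent appears to look for 
(the key names are my reading of how the VIF drivers populate them, so double-check 
them against the agent code; the values are placeholders):

    # Sketch only: stamp an OVS port with the external_ids the neutron OVS
    # agent appears to expect. Key names are assumptions to verify; the
    # example values are placeholders.
    import subprocess


    def stamp_port(port_name, neutron_port_id, mac, vm_uuid):
        subprocess.check_call([
            'ovs-vsctl', 'set', 'Interface', port_name,
            'external-ids:iface-id=%s' % neutron_port_id,
            'external-ids:iface-status=active',
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % vm_uuid,
        ])

    # e.g. stamp_port('tap43b9d367-32', '<neutron-port-uuid>',
    #                 'fa:16:3e:e7:1e:c3', '<instance-uuid>')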

~iain



On Fri, Mar 14, 2014 at 3:01 AM, Simon Pasquier simon.pasqu...@bull.net wrote:
 Hi,

 I've played a little with XenAPI + OVS. You might be interested by this
 bug report [1] that describes a related problem I've seen in this
 configuration. I'm not sure about Xen libvirt though. My assumption is
 that the future-proof solution for using Xen with OpenStack is the
 XenAPI driver but someone from Citrix (Bob?) may confirm.

 Note also that the security groups are currently broken with libvirt +
 OVS. As you noted, the iptables rules are applied directly to the OVS
 port thus they are not effective (see [2] for details). There's work in
 progress [3][4] to fix this critical issue. As far as the XenAPI driver
 is concerned, there is another bug [5] tracking the lack of support for
 security groups which should be addressed by the OVS firewall driver [6].

 HTH,

 Simon

 [1] https://bugs.launchpad.net/neutron/+bug/1268955
 [2] https://bugs.launchpad.net/nova/+bug/1112912
 [3] https://review.openstack.org/21946
 [4] https://review.openstack.org/44596
 [5] https://bugs.launchpad.net/neutron/+bug/1245809
 [6] https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver

 On 13/03/2014 19:35, iain macdonnell wrote:
 I've been playing with an icehouse build grabbed from fedorapeople. My
 hypervisor platform is libvirt-xen, which I understand may be
 deprecated for icehouse(?) but I'm stuck with it for now, and I'm
 using VLAN networking. It almost works, but I have a problem with
 networking. In havana, the VIF gets placed on a legacy ethernet
 bridge, and a veth pair connects that to the OVS integration bridge.
 I understand that this was done to enable iptables filtering at the
 VIF. In icehouse, the VIF appears to get placed directly on the
 integration bridge - i.e. the libvirt XML includes something like:

 <interface type='bridge'>
   <mac address='fa:16:3e:e7:1e:c3'/>
   <source bridge='br-int'/>
   <script path='vif-bridge'/>
   <target dev='tap43b9d367-32'/>
 </interface>


 The problem is that the port on br-int does not have the VLAN tag.
 i.e. I'll see something like:

 Bridge br-int
 Port tap43b9d367-32
 Interface tap43b9d367-32
 Port qr-cac87198-df
 tag: 1
 Interface qr-cac87198-df
 type: internal
 Port int-br-bond0
 Interface int-br-bond0
 Port br-int
 Interface br-int
 type: internal
 Port tapb8096c18-cf
 tag: 1
 Interface tapb8096c18-cf
 type: internal


 If I manually set the tag using 'ovs-vsctl set port tap43b9d367-32
 tag=1', traffic starts flowing where it needs to.

 I've traced this back a bit through the agent code, and find that the
 bridge port is ignored by the agent because it does not have any
 external_ids (observed with 'ovs-vsctl list Interface'), and so the
 update process that normally sets the tag is not invoked. It appears
 that Xen is adding the port to the bridge, but nothing is updating it
 with the neutron-specific external_ids that the agent expects to
 see.

 Before I dig any further, I thought I'd ask; is this stuff supposed to
 work at this point? Is it intentional that the VIF is getting placed
 directly on the integration bridge now? Might I be missing something
 in my configuration?

 FWIW, I've tried the ML2 plugin as well as the legacy OVS one, with
 the same result.

 TIA,

 ~iain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Actions design BP

2014-03-14 Thread Joshua Harlow
So this is an interesting question.

Let me explain why I think exposing is_sync is actually relatively dangerous 
and breaks the task-oriented model (at least taskflow's view of it).

First let me explain a little bit of what happens in taskflow to create and 
execute things.

1. User X creates objects derived from the task base class 
(https://github.com/openstack/taskflow/blob/master/taskflow/task.py#L31) 
that act as the primitives that execute and revert (nothing here 
specifies that they are synchronous or not, they just do some piece of work and 
return some results and have the ability to undo those pieces of work).
2. User X combines/links tasks (and also patterns themselves) together using 
taskflow's patterns 
(https://github.com/openstack/taskflow/tree/master/taskflow/patterns) 
(currently a small set, but it could get bigger as needed) to form a larger set of 
combined work (A -> B -> C for example); these patterns support nesting, so [A] 
in the example can itself expand into something like (E, F, G tasks) and so on. 
Let's call the result of this linking the Y object. NOTE: At this point there is 
still no is_sync or is_async, since here we are just defining ordering 
constraints that should be enforced at runtime.
3. User gives the Y object to an engine 
(https://github.com/openstack/taskflow/tree/master/taskflow/engines) in 
taskflow, providing the engine the persistence 
(https://github.com/openstack/taskflow/tree/master/taskflow/persistence) 
model/backend it wants to use (for storing intermediate results, for saving 
state changes) and tells the engine to execute(). At this point the engine will 
locate all tasks that have no dependencies on other tasks and start running 
them asynchronously (how this works depends on the engine type selected); the 
tasks that can execute begin executing (and they have the potential to signal 
their current progress to others via the update_progress method 
(https://github.com/openstack/taskflow/blob/master/taskflow/task.py#L78); 
engines support a concept of listeners 
(https://github.com/openstack/taskflow/tree/master/taskflow/listeners) 
that can be attached to engines to allow external entities to be informed of 
progress updates, state changes…). This process repeats until the workflow has 
completed or it fails (in which case revert() methods start to be called and 
each task is given a chance to undo whatever it has created; in the near future 
there will be ways to alter how this reversion happens with a concept of 
retry_controllers (https://review.openstack.org/#/c/71621/)).

TLDR: So in general you could say that all tasks are unaware of whether they are 
running async/sync, and in the above model it is up to the engine type to 
determine how things are run (since the engine is the controller that actually 
runs all tasks, making sure the ordering constraints established in step #2 
are retained).
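
To make steps 1-3 concrete, here is a small runnable example (a sketch only; the 
volume tasks are made up for illustration and are not real Cinder code):

    # Sketch: two made-up tasks linked into a linear flow and run by an
    # engine. Inputs come from the 'store'; CreateVolume's result is passed
    # to AttachVolume by name.
    import taskflow.engines
    from taskflow.patterns import linear_flow
    from taskflow import task


    class CreateVolume(task.Task):
        default_provides = 'volume_id'

        def execute(self, size):
            print('creating a %d GB volume' % size)
            return 'vol-123'

        def revert(self, size, **kwargs):
            print('undoing volume creation')


    class AttachVolume(task.Task):
        def execute(self, volume_id):
            print('attaching %s' % volume_id)


    flow = linear_flow.Flow('boot-from-volume').add(
        CreateVolume(),
        AttachVolume(),
    )

    # The engine resolves the ordering constraints and runs the tasks.
    taskflow.engines.run(flow, store={'size': 10})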

To me this kind of disconnection (not allowing a task to specify its 
async/sync nature) is useful and helps retain sanity. In a way it seems against the 
task model to have tasks provide this kind of information (leaky abstraction…) 
and at least in the taskflow view it gives tasks too much control over how 
they are executed (why should a task care?). It also completely alters the 
state diagram (https://wiki.openstack.org/wiki/File:Tf_task_state_diagram.png) 
that is persisted and used for resumption when this information is allowed to 
be specified by a task. What does it mean for a task to be RUNNING but have the 
task continue to run asynchronously? In a way, isn't this already what the 
engine (at least in taskflow) is doing internally anyway (running everything 
asynchronously)? At least in taskflow all tasks are already running with 
executors 
(https://github.com/openstack/taskflow/blob/master/taskflow/engines/action_engine/executor.py) 
that return future(s) that are waited upon (dependent tasks cannot run until 
the future of a predecessor task has completed).

-Josh

From: Renat Akhmerov rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, March 14, 2014 at 5:09 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Actions design BP

- is_sync() - consider using an attribute instead -  @mistral.async

Well, I had an idea that it may depend on how a particular action instance is 
parameterized (in other words, a dynamic thing rather than static property). It 
would just give more flexibility

- can we think of a way to unify sync and async actions from engine's 
standpoint? So that we don't special-case it in the engine?

To be precise, engine has no knowledge about it. Only executor does and it has 
to but the difference is pretty small. In case if action is sync it should just 
call the API method 

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-14 Thread Mike Wilson
+1 to what Jay says here. This hidden behavior mostly just causes problems
and allows hacking hidden ways to restore things.

-Mike


On Fri, Mar 14, 2014 at 9:55 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-03-14 at 08:37 +0100, Radomir Dopieralski wrote:
  Hello,
 
  I also think that this thread is going in the wrong direction, but I
  don't think the direction Boris wants is the correct one either. Frankly
  I'm a little surprised that nobody mentioned another advantage that soft
  delete gives us, the one that I think it was actually used for
 originally.
 
  You see, soft delete is an optimization. It's there to make the system
  work faster as a whole, have less code and be simpler to maintain and
 debug.
 
  How does it do it, when, as clearly shown in the first post in this
  thread, it makes the queries slower, requires additional indices in the
  database and more logic in the queries?

 I feel it isn't an optimization if:

 * It slows down the code base
 * Makes the code harder to read and understand
 * Deliberately obscures the actions of removing and restoring resources
 * Encourages the idea that everything in the system is undoable, like
 the cloud is a Word doc.

   The answer is, by doing more
  with those queries, by making you write less code, execute fewer queries
  to the databases and avoid duplicating the same data in multiple places.

 Fewer queries does not always make faster code, nor does it lead to
 inherently race-free code.

  OpenStack is a big, distributed system of multiple databases that
  sometimes rely on each other and cross-reference their records. It's not
  uncommon to have some long-running operation started, that uses some
  data, and then, in the middle of its execution, have that data deleted.
  With soft delete, that's not a problem -- the operation can continue
  safely and proceed as scheduled, with the data it was started with in
  the first place -- it still has access to the deleted records as if
  nothing happened.

 I believe a better solution would be to use Boris' solution and
 implement safeguards around the delete operation. For instance, not
 being able to delete an instance that has tasks still running against
 it. Either that, or implement true task abortion logic that can notify
 distributed components about the need to stop a running task because
 either the user wants to delete a resource or simply cancel the
 operation they began.

   You simply won't be able to schedule another operation
  like that with the same data, because it has been soft-deleted and won't
  pass the validation at the beginning (or even won't appear in the UI or
  CLI). This solves a lot of race conditions, error handling, additional
  checks to make sure the record still exists, etc.

 Sorry, I disagree here. Components that rely on the soft-delete behavior
 to get the resource data from the database should instead respond to a
 NotFound that gets raised by aborting their running task.
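
 As a sketch of that explicit path (the exception and task names below are
 hypothetical, not a real OpenStack component), the task catches the not-found
 error and aborts instead of reading a soft-deleted row:

     class NotFound(Exception):
         # Raised when a resource row no longer exists.
         pass

     FAKE_DB = {'instance-1': {'state': 'active'}}

     def get_instance(instance_id):
         try:
             return FAKE_DB[instance_id]
         except KeyError:
             raise NotFound(instance_id)

     def run_task(instance_id):
         try:
             instance = get_instance(instance_id)
         except NotFound:
             # Explicit abort path: no deleted row has to linger just so the
             # task can pretend the resource still exists.
             return 'aborted'
         instance['state'] = 'resized'
         return 'done'

     print(run_task('instance-1'))   # 'done'
     del FAKE_DB['instance-1']       # resource deleted mid-workflow
     print(run_task('instance-1'))   # 'aborted'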

  Without soft delete, you need to write custom code every time to handle
  the case of a record being deleted mid-operation, including all the
  possible combinations of which record and when.

 Not custom code. Explicit code paths for explicit actions.

   Or you need to copy all
  the relevant data in advance over to whatever is executing that
  operation.

 This is already happening.

  This cannot be abstracted away entirely (although tools like
  TaskFlow help), as this is specific to the case you are handling. And
  it's not easy to find all the places where you can have a race condition
  like that -- especially when you are modifying existing code that has
  been relying on soft delete before. You can have bugs undetected for
  years, that only appear in production, on very large deployments, and
  are impossible to reproduce reliably.
 
  There are more similar cases like that, including cascading deletes and
  more advanced stuff, but I think this single case already shows that
   the advantages of soft delete outweigh its disadvantages.

 I respectfully disagree :) I think the benefits of explicit code paths
 and increased performance of the database outweigh the costs of changing
 existing code.

 Best,
 -jay

  On 13/03/14 19:52, Boris Pavlovic wrote:
   Hi all,
  
  
   I would like to fix direction of this thread. Cause it is going in
 wrong
   direction.
  
   To assume:
   1) Yes, restoring already deleted resources could be useful.
   2) Current approach with soft deletion is broken by design and we
 should
   get rid of them.
  
   More about why I think that it is broken:
   1) When you are restoring some resource you should restore N records
   from N tables (e.g. VM)
   2) Restoring sometimes means not only restoring DB records.
   3) Not all resources should be restorable (e.g. why I need to restore
   fixed_ip? or key-pairs?)
  
  
   So what we should think about is:
   1) How to implement restoring functionally in common way (e.g.
 framework
   

Re: [openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday, 14-00 UTC

2014-03-14 Thread Eugene Nikanorov
Meeting minutes:

Minutes:
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/neutron_lbaas.2014-03-13-14.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/neutron_lbaas.2014-03-13-14.00.txt
Log:
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/neutron_lbaas.2014-03-13-14.00.log.html

Thanks,
Eugene.


On Thu, Mar 13, 2014 at 5:33 AM, Samuel Bercovici samu...@radware.com wrote:

  Hi Eugene,



 I am with Evgeny on a business trip so we will not be able to join this
 time.

 I have not seen any progress on the model side. Did I miss anything?

 Will look for the meeting summary



 Regards,

 -Sam.





 *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
 *Sent:* Wednesday, March 12, 2014 10:21 PM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday,
 14-00 UTC



 Hi neutron and lbaas folks,



 Let's keep our regular meeting on Thursday, at 14-00 UTC at
 #openstack-meeting



 We'll update on current status and continue object model discussion.

 We have many new folks that have recently shown interest in the lbaas
 project and are asking for a mini summit. I think it would be helpful for everyone
 interested in lbaas to join the meeting.



 Thanks,

 Eugene.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-14 Thread Joshua Harlow
Off topic but, I'd like to see a word doc written out with the history of the 
cloud, that'd be pretty sweet.

Especially if it's something like Google Docs where you can watch the changes 
happen in real time.

+2

From: Jay Pipes jaypi...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, March 14, 2014 at 7:55 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
deletion (step by step)

On Fri, 2014-03-14 at 08:37 +0100, Radomir Dopieralski wrote:
Hello,
I also think that this thread is going in the wrong direction, but I
don't think the direction Boris wants is the correct one either. Frankly
I'm a little surprised that nobody mentioned another advantage that soft
delete gives us, the one that I think it was actually used for originally.
You see, soft delete is an optimization. It's there to make the system
work faster as a whole, have less code and be simpler to maintain and debug.
How does it do it, when, as clearly shown in the first post in this
thread, it makes the queries slower, requires additional indices in the
database and more logic in the queries?

I feel it isn't an optimization if:

* It slows down the code base
* Makes the code harder to read and understand
* Deliberately obscures the actions of removing and restoring resources
* Encourages the idea that everything in the system is undoable, like
the cloud is a Word doc.

  The answer is, by doing more
with those queries, by making you write less code, execute fewer queries
to the databases and avoid duplicating the same data in multiple places.

Fewer queries does not always make faster code, nor does it lead to
inherently race-free code.

OpenStack is a big, distributed system of multiple databases that
sometimes rely on each other and cross-reference their records. It's not
uncommon to have some long-running operation started, that uses some
data, and then, in the middle of its execution, have that data deleted.
With soft delete, that's not a problem -- the operation can continue
safely and proceed as scheduled, with the data it was started with in
the first place -- it still has access to the deleted records as if
nothing happened.

I believe a better solution would be to use Boris' solution and
implement safeguards around the delete operation. For instance, not
being able to delete an instance that has tasks still running against
it. Either that, or implement true task abortion logic that can notify
distributed components about the need to stop a running task because
either the user wants to delete a resource or simply cancel the
operation they began.

  You simply won't be able to schedule another operation
like that with the same data, because it has been soft-deleted and won't
pass the validation at the beginning (or even won't appear in the UI or
CLI). This solves a lot of race conditions, error handling, additional
checks to make sure the record still exists, etc.

Sorry, I disagree here. Components that rely on the soft-delete behavior
to get the resource data from the database should instead respond to a
NotFound that gets raised by aborting their running task.

Without soft delete, you need to write custom code every time to handle
the case of a record being deleted mid-operation, including all the
possible combinations of which record and when.

Not custom code. Explicit code paths for explicit actions.

  Or you need to copy all
the relevant data in advance over to whatever is executing that
operation.

This is already happening.

This cannot be abstracted away entirely (although tools like
TaskFlow help), as this is specific to the case you are handling. And
it's not easy to find all the places where you can have a race condition
like that -- especially when you are modifying existing code that has
been relying on soft delete before. You can have bugs undetected for
years, that only appear in production, on very large deployments, and
are impossible to reproduce reliably.
There are more similar cases like that, including cascading deletes and
more advanced stuff, but I think this single case already shows that
the advantages of soft delete outweigh its disadvantages.

I respectfully disagree :) I think the benefits of explicit code paths
and increased performance of the database outweigh the costs of changing
existing code.

Best,
-jay

On 13/03/14 19:52, Boris Pavlovic wrote:
 Hi all,


 I would like to fix direction of this thread. Cause it is going in wrong
 direction.

 To assume:
 1) Yes, restoring already deleted resources could be useful.
 2) Current approach with soft deletion is broken by design and we should
 get rid of them.

 More about why I think that it is broken:
 1) When you are 

Re: [openstack-dev] [I18n][Horizon] I18n compliance test string freeze exception

2014-03-14 Thread Akihiro Motoki
Hi,

I noticed that the proposed criteria are still ambiguous during reviews.
Currently the Horizon team deals with mini features as bugs at the
beginning of the RC cycle,
and they usually include a few string changes, given the nature of Horizon.
The project policy that we accept mini features as bugs needs to match
the string freeze policy. Otherwise such patches will end up
being deferred.

It is a balance between when the translation deadline is and how many new
strings are added
(in my experience, around 5 string additions/changes).
IMO, as long as mini features land early in the RC cycle, adding a few words
can be accepted.
My proposal is that we have a deadline for merges of mini features
with string updates.
The deadline I have in mind is the first two weeks after feature freeze Tuesday.

Thought?

Akihiro

On Thu, Mar 13, 2014 at 9:05 PM, Julie Pichon jpic...@redhat.com wrote:
 On 13/03/14 09:28, Akihiro Motoki wrote:
 +1

 In my understanding, String Freeze is a SOFT freeze, as Daisy describes.
 Applying string freeze to incorrect or incomprehensible messages is
 not good from a UX point of view;
 shipping the release with such strings makes the situation
 worse, makes people feel OpenStack is not mature,
 and can lead them to think OpenStack doesn't care about such details :-(

 From my experience of working as a translator and bridging Horizon and
 I18N community
 in the previous releases, the proposed policy sounds good and it can
 be accepted by translators.

 That sounds good to me as well. I think we should modify the
 StringFreeze page [1] to reflect this, as it sounds a lot more strict
 than what the translation team actually wishes for.

 Thanks,

 Julie

 [1] https://wiki.openstack.org/wiki/StringFreeze

 Thanks,
 Akihiro


 On Thu, Mar 13, 2014 at 5:41 PM, Ying Chun Guo guoyi...@cn.ibm.com wrote:
 Hello, all

  Our translators started translation and I18n compliance testing at the string
  freeze date.
 During the translation and test, we may report bugs.
 Some bugs are incorrect and incomprehensible messages.
 Some bugs are user facing messages but not marked with _().
 All of these bugs might introduce string changes and add new strings to be
 translated.
 I noticed some patches to fix these bugs got -1 because of string freeze.
 For example, https://review.openstack.org/#/c/79679/
 and https://review.openstack.org/#/c/79948/

  StringFreeze - Start translation and test - Report bugs which may cause
  string changes - Cannot fix these bugs because of StringFreeze.
 So I'd like to bring this question to dev: when shall we fix these errors
 then?

  From my point of view, FeatureFreeze means not accepting new features; it
  doesn't mean we cannot fix bugs in features.
  StringFreeze should mean not adding new strings, but we should still be able to
  improve strings and fix bugs.
 I think shipping with incorrect messages is worse than strict string freeze.

 From my experiences in Havana release, since StringFreeze, there are
 volunteers from Horizon team who would
 keep an eye on strings changes. If string changes happen, they would send
 email
 to I18n ML to inform these changes. Many thanks to their work.
 In Havana release, we kept on those kind of bug reporting and fixing till
 RC2.
 Most of them are happened before RC1.

 Now I hope to hear your input to this situation: when and how should we fix
 these kind of bugs in Icehouse?

 Best regards
 Ying Chun Guo (Daisy)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] [zeromq] nova-rpc-zmq-receiver bottleneck

2014-03-14 Thread Mike Wilson
Hi Yatin,

I'm glad you are thinking about the drawbacks that the zmq-receiver causes; I
want to give you a reason to keep the zmq-receiver and get your feedback.
The way I think about the zmq-receiver is as a tiny little mini-broker that
exists separate from any other OpenStack service. As such, its
implementation can be augmented to support store-and-forward and possibly
other messaging behaviors that are desirable for ceilometer currently and
possibly other things in the future. Integrating the receiver into each
service is going to remove its independence and black-box nature and give
it all the bugs and quirks of any project it gets lumped in with. I would
prefer that we continue to improve zmq-receiver to overcome the tough
parts. Either that or find a good replacement and use that. An example of a
possible replacement might be the qpid dispatch router[1], although this
guy explicitly wants to avoid any store and forward behaviors. Of course,
dispatch router is going to be tied to qpid, I just wanted to give an
example of something with similar functionality.

-Mike


On Thu, Mar 13, 2014 at 11:36 AM, yatin kumbhare yatinkumbh...@gmail.com wrote:

 Hello Folks,

 When zeromq is used as the rpc backend, the nova-rpc-zmq-receiver service needs
 to be run on every node.

 zmq-receiver receives messages on tcp://*:9501 with socket type PULL and,
 based on the topic name (which is extracted from the received data), forwards
 the data to the respective local services over the IPC protocol.

 Meanwhile, the OpenStack services listen/bind on an IPC socket with socket type
 PULL.
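
 In rough terms the forwarding role described here looks like the pyzmq sketch
 below (the topic parsing and IPC path layout are simplified assumptions, not
 the actual oslo zmq driver code):

     import zmq

     def receiver_loop(ipc_dir='/var/run/openstack'):
         ctx = zmq.Context()

         pull = ctx.socket(zmq.PULL)
         pull.bind('tcp://*:9501')        # all inbound RPC lands here

         push_sockets = {}
         while True:
             frames = pull.recv_multipart()
             topic = frames[0].decode()   # assumed e.g. 'compute.computenodex'

             if topic not in push_sockets:
                 push = ctx.socket(zmq.PUSH)
                 # Forward to the local service listening on an IPC socket.
                 push.connect('ipc://%s/zmq_topic_%s' % (ipc_dir, topic))
                 push_sockets[topic] = push

             push_sockets[topic].send_multipart(frames)

     receiver_loop()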

 I see zmq-receiver as a bottleneck and an overhead in the current
 design:
 1. if this service crashes, communication is lost.
 2. there is the overhead of running this extra service on every node, which
 just forwards messages as-is.


 I'm looking forward to removing the zmq-receiver service and enabling direct
 communication (nova-* and cinder-*) across and within nodes.

 I believe this will make the zmq experience more seamless.

 The communication will change from IPC to a zmq TCP socket for each
 service.

 For example, an rpc.cast from scheduler to compute would be direct rpc message
 passing, with no routing through zmq-receiver.

 Now, with TCP, all services will bind to a unique port (the port range could
 be 9501-9510).

 From nova.conf: rpc_zmq_matchmaker =
 nova.openstack.common.rpc.matchmaker_ring.MatchMakerRing.

 I have put arbitrary port numbers after the service names.

 file:///etc/oslo/matchmaker_ring.json

 {
     "cert:9507": ["controller"],
     "cinder-scheduler:9508": ["controller"],
     "cinder-volume:9509": ["controller"],
     "compute:9501": ["controller", "computenodex"],
     "conductor:9502": ["controller"],
     "consoleauth:9503": ["controller"],
     "network:9504": ["controller", "computenodex"],
     "scheduler:9506": ["controller"],
     "zmq_replies:9510": ["controller", "computenodex"]
 }

 Here, the json file would keep track of the ports for each service.

 Looking forward to community feedback on this idea.


 Regards,
 Yatin


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FW: Neutron L3-DVR F2F Discussion - Follow up L3-Agent Design Doc

2014-03-14 Thread Smith, Michael (HPN RD)
All,

As requested at the F2F we’ve created a design doc to cover changes to the 
L3-Agent. We have already sent out the L2-Agent doc for review and now we are 
providing the L3-Agent doc. Please provide your review comments. See below 
for a link to the Google doc page.

https://docs.google.com/document/d/1jCmraZGirmXq5V1MtRqhjdZCbUfiwBhRkUjDXGt5QUQ/edit

Yours,

Michael Smith
Hewlett-Packard Company
HP Networking RD
8000 Foothills Blvd. M/S 5557
Roseville, CA 95747
Ph: 916 785-0918
Fax: 916 785-1199


_
From: Vasudevan, Swaminathan (PNB Roseville)
Sent: Monday, February 17, 2014 8:48 AM
To: Baldwin, Carl (HPCS Neutron); sylvain.afch...@enovance.com; James Clark, 
(CES BU) (james.cl...@kt.com); sumit naiksatam (sumitnaiksa...@gmail.com); 
Nachi Ueno (na...@ntti3.com); Kyle Mestery (mest...@siliconloons.com); 
enikanorov (enikano...@mirantis.com); Assaf Muller (amul...@redhat.com); 
cloudbe...@gmail.com; OpenStack Development Mailing List 
(openstack-dev@lists.openstack.org); 'mmccl...@yahoo-inc.com'; Hemanth Ravi 
(hemanth.r...@oneconvergence.com); Grover, Rajeev; Smith, Michael (HPN RD); 
Narasimhan, Vivekanandan; Birru, Murali Yadav
Cc: 'Sahdev P Zala'; 'Michael S Ly'; 'kba...@redhat.com'; 'Donaldson, 
Jonathan'; 'Kiran.Makhijani'; 'Murali Allada'; Rouault, Jason (HP Cloud); 
Atwood, Mark; 'Rajesh Ramchandani'; 'Miguel Angel Ajo Pelayo'; 'CARVER, PAUL'; 
'Geraint North'; 'Kristen Wright (schaffk)'; 'Srinivas R Brahmaroutu'; 'Fei 
Long Wang'; 'Marcio A Silva'; Clark, Robert Graham; 'Dugger, Donald D'; Walls, 
Jeffrey Joel (Cloud OS RD); Kant, Arun; Pratt, Gavin; 
ravi...@gmail.commailto:ravi...@gmail.com; Shurtleff, Dave; 
'steven.l...@hgst.com'; 'Ryan Hsu'; 'Jesse Noller'; 'David Kranz'; 'Shekar 
Nagesh'; 'Maciocco, Christian'; 'Yanick DELARBRE'; 'Brian Emord'; 'Edmund 
Troche'; 'Gabriel Hurley'; 'James Carey'; Palecanda, Ponnappa; 'Bill Owen'; 
Millward, Scott T (HP Networking / CEB- Roseville); 'Michael Factor'; 'Mohammad 
Banikazemi'; 'Octavian Ciuhandu'; 'Dagan Gilat'; 'Kodam, Vijayakumar (EXT-Tata 
Consultancy Ser - FI/Espoo)'; 'Linda Mellor'; 'LELOUP Julien'; 'Jim Fehlig'; 
'Stefan Hellkvist'; Carlino, Chuck; 'David Peraza'; 'Shiv Haris'; 'Lei Lei 
Zhou'; 'Zuniga, Israel'; 'Ed Hall'; Modi, Prashant; '공용준(Next Gen)'; 'David 
Lai'; 'Murali Allada'; 'Daryl Walleck'; 'Robert Craig'; Nguyen, Hoa (Palo 
Alto); 'Gardiner, Mike'; '안재석(Cloud코어개발2팀)'; Johnson, Anita (Exec 
Asst:SDN-Roseville); Hobbs, Jeannie (HPN Executive Assistant); 'Abby Sohal 
(aksohal)'; 'Tim Serong'; 'greg_jac...@dell.com'; 'Hathaway.Jon'; 'Robbie 
Gill'; Griswold, Joshua; Arunachalam, Yogalakshmi (HPCC Cloud OS); Keith Burns 
(alagalah); Assaf Muller; William Henry; Manish Godara
Subject: RE: Neutron L3-DVR F2F Discussion - Conference Room Updated - 
Directions Attached


Hi Folks,
Thanks for attending the Neutron L3-DVR F2F discussion last week and thanks for 
all your feedback.
Here is the link to the slides that I presented during our meeting.

https://docs.google.com/presentation/d/1XJY30ZM0K3xz1U4CVWaQuaqBotGadAFM1xl1UuLtbrk/edit#slide=id.p

Here are the meeting notes.


1.  DVR Team updated the OpenStack Community on what has changed from the 
earlier proposal.
a.  No kernel Module
b.  Use existing namespaces
c.  Split L3, Floating IP and default External Access.
d.  Provide migration Path
e.  Supporting both East-West and North-South.
2.  Got a clear direction from the PTL that we don’t need to address the 
distributed SNAT at this point of time and focus on the centralized solution 
that we proposed.
3.  The DVR Agent design (both L2 and L3) should  be discussed with the 
respective teams before we proceed. Probably a separate document or blueprint 
that discusses the flows.
4.  No support for Dynamic routing protocols.
5.  Discussed both active blueprints.
6.  Community suggested that we need to consider or check if the OVS ARP 
responder can be utilized. ( Proposed by Eduard, working on it).
7.  HA for the Centralized Service Node.

Thanks
Swami


-Original Appointment-
From: Vasudevan, Swaminathan (PNB Roseville)
Sent: Wednesday, February 05, 2014 10:02 AM
To: Vasudevan, Swaminathan (PNB Roseville); Baldwin, Carl (HPCS Neutron); 
sylvain.afch...@enovance.com; James Clark, (CES BU) (james.cl...@kt.com); 
sumit naiksatam (sumitnaiksa...@gmail.com); Nachi Ueno (na...@ntti3.com); 
Kyle Mestery (mest...@siliconloons.com); enikanorov 

Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-14 Thread Jay S Bryant
From:   Duncan Thomas duncan.tho...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   03/14/2014 11:49 AM
Subject:Re: [openstack-dev] [Oslo] Improving oslo-incubator 
update.py



On 14 March 2014 16:10, Jay S Bryant jsbry...@us.ibm.com wrote:
 -- Duncan, It is important to know what commits are being brought over 
to
 help provide a pointer to
 -- the possible cause of subsequent bugs that arise.  I.E. if we sync 
up
 the DB, there is a commit for fixing
 -- db connection order and suddenly we are getting intermittent DB
 connection failures, it give us
 -- a starting point to fixing the issue.


Jay, there's been a mix-up in who's saying what here. I *very much*
want to know what commits are being bought over. For slightly
different reasons (I'm mostly wanting them for easy review, you for
bug fixing). Brant is suggesting that just the last commit ID is
enough, which I disagree with (and will continue to hit -1/-2 for).

If somebody was to improve the import script to do this automatically
that would be great. Currently I can't see an easy way of
programatically telling when the last import was - I'll take another
look at the problem if somebody smarter than me doesn't sort it first

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Duncan,

Thanks for clarifying!  I was quite confused as I thought you were in
favor of the details in the commit messages.  Sorry for confusing who was 
carrying what stance forward.

Brant,

I agree with Duncan.  I think that having details of the commits that
are being pulled in with a sync commit is important.  I realize that the
user probably isn't going to do a perfect job, but it certainly helps 
to give context to the changes being made and provides important bread 
crumbs
for debug.

It would be great if we could get the process for this automated.  In the 
mean time, those of us doing the syncs will just have to slog through the
process.

Jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-14 Thread Jay S Bryant
From:   Duncan Thomas duncan.tho...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   03/14/2014 11:56 AM
Subject:Re: [openstack-dev] Duplicate code for processing REST 
APIs



On 13 March 2014 21:13, Roman Podoliaka rpodoly...@mirantis.com wrote:
 Hi Steven,

 Code from openstack/common/ dir is 'synced' from oslo-incubator. The
 'sync' is effectively a copy of oslo-incubator subtree into a project
 source tree. As syncs are not done at the same time, the code of
 synced modules may indeed by different for each project depending on
 which commit of oslo-incubator was synced.


Worth noting that there have been a few cases of projects patching
OSLO bugs intheir own tree rather than fixing in OSLO then resyncing.
If anybody has any tooling that can detect that, I'd love to see the
results.

I'm generally of the opinion that cinder is likely to be resistant to
more parts of OSLO being used in cinder unless they are a proper
library - syncs have caused us significant pain, code churn, review
load and bugs in the last 12 months. I am but one voice among many,
but I know I'm not the only member of core who feels this to be the
case. Hopefully I can spend some time with OSLO core at the summit and
discuss the problems I've found.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Duncan,

I will come with you for that discussion.  :-)  Have some thoughts and 
questions to share as well.

Regardless, I think we need to make sure to actually get our Oslo syncs 
started for Cinder early
in Juno.  We are way behind on db and db.sqlalchemy.  Planning to propose 
changes to that as soon
as we switch over. 

Jay___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2]

2014-03-14 Thread racha
Hi,
  Is it possible (in the latest upstream) to partition the same
integration bridge br-int into multiple isolated partitions (in terms of
lvids ranges, patch ports, etc.) between OVS mechanism driver and ODL
mechanism driver? And then how can we pass some details to Neutron API (as
in the provider segmentation type/id/etc) so that ML2 assigns a mechanism
driver to the virtual network? The other alternative I guess is to create
another integration bridge managed by a different Neutron instance? Probably
I am missing something.

Best Regards,
Racha


On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti nader.laho...@gmail.com wrote:

 1) Does it mean an interim solution is to have our own plugin (and have
 all the changes in it) and declare it as core_plugin instead of Ml2Plugin?

 2) The other issue as I mentioned before, is that the extension(s) is not
 showing up in the result, for instance when create_network is called
 [*result = super(Ml2Plugin, self).create_network(context, network)]*, and
 as a result they cannot be used in the mechanism drivers when needed.

 Looks like process_extensions was disabled when the fix for Bug 1201957
 was committed; here is the change.
 Any idea why it was disabled?

 --
 Avoid performing extra query for fetching port security binding

 Bug 1201957


 Add a relationship performing eager load in Port and Network

 models, thus preventing the 'extend' function from performing

 an extra database query.

 Also fixes a comment in securitygroups_db.py


 Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa

 commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (1 parent: ca421e7)
 Salvatore Orlando (salv-orlando) authored 8 months ago

 neutron/db/db_base_plugin_v2.py:

 @@ -995,7 +995,7 @@ def create_network(self, context, network):
                 'status': constants.NET_STATUS_ACTIVE}
             network = models_v2.Network(**args)
             context.session.add(network)
 -           return self._make_network_dict(network)
 +           return self._make_network_dict(network,
 +                                          process_extensions=False)

     def update_network(self, context, id, network):
         n = network['network']

 ---


 Regards,
 Nader.





  On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura kuk...@noironetworks.com wrote:


 On 3/7/14, 3:53 AM, Édouard Thuleau wrote:

 Yes, that sounds good to be able to load extensions from a mechanism
 driver.

 But another problem I think we have with ML2 plugin is the list
 extensions supported by default [1].
 The extensions should only load by MD and the ML2 plugin should only
 implement the Neutron core API.


 Keep in mind that ML2 supports multiple MDs simultaneously, so no single
 MD can really control what set of extensions are active. Drivers need to be
 able to load private extensions that only pertain to that driver, but we
 also need to be able to share common extensions across subsets of drivers.
 Furthermore, the semantics of the extensions need to be correct in the face
 of multiple co-existing drivers, some of which know about the extension,
 and some of which don't. Getting this properly defined and implemented
 seems like a good goal for juno.

 -Bob



  Any though ?
 Édouard.

  [1]
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87



 On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi,

 I think it is better to continue the discussion here. It is a good log
 :-)

 Eugine and I talked the related topic to allow drivers to load
 extensions)  in Icehouse Summit
 but I could not have enough time to work on it during Icehouse.
 I am still interested in implementing it and will register a blueprint
 on it.

 etherpad in icehouse summit has baseline thought on how to achieve it.
 https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
 I hope it is a good start point of the discussion.

 Thanks,
 Akihiro

 On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti nader.laho...@gmail.com
 wrote:
  Hi Kyle,
 
  Just wanted to clarify: Should I continue using this mailing list to
 post my
  question/concerns about ML2? Please advise.
 
  Thanks,
  Nader.
 
 
 
  On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery 
 mest...@noironetworks.com
  wrote:
 
  Thanks Edgar, I think this is the appropriate place to continue this
  discussion.
 
 
  On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana emag...@plumgrid.com
 wrote:
 
  Nader,
 
  I would encourage you to first discuss the possible extension with
 the
  ML2 team. Rober and Kyle are leading this effort and they have a IRC
 meeting
  every week:
 
 https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
 
  Bring your concerns on this meeting and get the right feedback.
 
  Thanks,
 
  Edgar
 
  From: Nader Lahouti nader.laho...@gmail.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  Date: Thursday, March 6, 2014 12:14 PM
  To: OpenStack List 

Re: [openstack-dev] [keystone] All LDAP users returned using keystone v3/users API

2014-03-14 Thread Anna A Sortland
Hi Nathan, 
I agree that we should be using filters on LDAP searches whenever possible. 
I still believe there is a valid use case for a new 'roles' API to retrieve 
users with role grants, and I am not sure it is possible to do this without 
querying LDAP one user at a time. 


Anna Sortland
Cloud Systems Software Development
IBM Rochester, MN
annas...@us.ibm.com






From:   Nathan Kinder nkin...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   03/13/2014 09:50 PM
Subject:Re: [openstack-dev] [keystone] All LDAP users returned 
using keystone v3/users API



On 03/13/2014 08:36 AM, Anna A Sortland wrote:
 [A] The current keystone LDAP community driver returns all users that
 exist in LDAP via the API call v3/users, instead of returning just users
 that have role grants (similar processing is true for groups). This
 could potentially be a very large number of users. We have seen large
 companies with LDAP servers containing hundreds and thousands of users.
 We are aware of the filters available in keystone.conf
 ([ldap].user_filter and [ldap].query_scope) to cut down on the number of
 results, but they do not provide sufficient filtering (for example, it
 is not possible to set user_filter to members of certain known groups
 for OpenLDAP without creating a memberOf overlay on the LDAP server).
 
 [Nathan Kinder] What attributes would you filter on?  It seems to me
 that LDAP would need to have knowledge of the roles to be able to filter
 based on the roles.  This is not necessarily the case, as identity and
 assignment can be split in Keystone such that identity is in LDAP and
 role assignment is in SQL.  I believe it was designed this way to deal
 with deployments
 where LDAP already exists and there is no need (or possibility) of
 adding role info into LDAP.
 
 [A] That's our main use case. The users and groups are in LDAP and role
 assignments are in SQL.
 You would filter on role grants and this information is in SQL backend.
 So new API would need to query both identity and assignment drivers.
 
 [Nathan Kinder] Without filtering based on a role attribute in LDAP, I
 don't think that there is a good solution if you have OpenStack and
 non-OpenStack users mixed in the same container in LDAP.
 If you want to first find all of the users that have a role assigned to
 them in the assignments backend, then pull their information from LDAP,
 I think that you will end up with one LDAP search operation per user.
 This also isn't a very scalable solution.
 
 [A] What was the reason the LDAP driver was written this way, instead of
 returning just the users that have OpenStack-known roles? Was the
 creation of a separate API for this function considered?
 Are other exploiters of OpenStack (or users of Horizon) experiencing
 this issue? If so, what was their approach to overcome this issue? We
 have been prototyping a keystone extension that provides an API that
 provides this filtering capability, but it seems like a function that
 should be generally available in keystone.
 
 [Nathan Kinder] I'm curious to know how your prototype is looking to
 handle this.
 
 [A] The prototype basically first calls assignment API
 list_role_assignments() to get a list of users and groups with role
 grants. It then iterates the retrieved list and calls identity API
 list_users_in_group() to get the list of users in these groups with
 grants and get_user() to get users that have role grants but do not
 belong to the groups with role grants (a call for each user). Both calls
 ignore groups and users that are not found in the LDAP registry but
 exist in SQL (this could be the result of a user or group being removed
 from LDAP, but the corresponding role grant was not revoked). Then the
 code removes duplicates if any and returns the combined list.
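
 In rough outline, the prototype described above amounts to something like the
 sketch below. The driver calls follow the API names mentioned in this thread;
 the exact signatures, the assignment dict keys and the skip-on-missing handling
 are assumptions:

     def list_users_with_role_grants(assignment_api, identity_api):
         users = {}
         for grant in assignment_api.list_role_assignments():
             try:
                 if 'group_id' in grant:
                     # Expand group grants into their member users.
                     for user in identity_api.list_users_in_group(
                             grant['group_id']):
                         users[user['id']] = user
                 elif 'user_id' in grant and grant['user_id'] not in users:
                     users[grant['user_id']] = identity_api.get_user(
                         grant['user_id'])
             except Exception:
                 # Grant refers to a user/group that no longer exists in LDAP
                 # (removed without revoking the grant), so skip it.
                 continue
         # Duplicates are already collapsed by keying on user id.
         return list(users.values())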

My main concern about this is that you have a single LDAP search
operation per user.  This will get you the correct results, but it isn't
very efficient for the LDAP server if you have a large number of users.
 Performing a single LDAP search operation will perform better if there
is some attribute you can use to filter on, as the connection handling
and operation parsing overhead will be much less.  If you are unable to
add an attribute in LDAP that identifies users that Keystone should list
(such as memberOf), you may not have much choice other than your proposal.
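
 For contrast, a single filtered search with python-ldap looks roughly like the
 sketch below (the server, bind DN and group DN are hypothetical, and the filter
 presumes a memberOf-style attribute is available, which is exactly the sticking
 point discussed above):

     import ldap

     conn = ldap.initialize('ldap://ldap.example.com')
     conn.simple_bind_s('cn=admin,dc=example,dc=com', 'secret')

     # One search operation for all relevant users, instead of one per user.
     users = conn.search_s(
         'ou=users,dc=example,dc=com',
         ldap.SCOPE_SUBTREE,
         '(memberOf=cn=openstack,ou=groups,dc=example,dc=com)',
         ['cn', 'mail'])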

 
 The new extension API is /v3/my_new_extension/users. Maybe a better
 name would be v3/roles/users (list users with any role) - compare to the
 existing v3/roles/{role_id}/users (list users with a specified
 role).
 
 Another alternative that we've tried is just a new identity driver that
 inherits from keystone.identity.backends.ldap.LDAPIdentity and overrides
 just the list_users() function. That's probably not the best approach
 from OpenStack standards point of view but I would like to get
 community's feedback on whether this is 

Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-14 Thread Jay S Bryant
From:   Brant Knudson b...@acm.org
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   03/14/2014 02:21 PM
Subject:Re: [openstack-dev] [Oslo] Improving oslo-incubator 
update.py






On Fri, Mar 14, 2014 at 2:05 PM, Jay S Bryant jsbry...@us.ibm.com wrote:

It would be great if we could get the process for this automated.  In the 
mean time, those of us doing the syncs will just have to slog through the 
process. 

Jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


What's the process? How do I generate the list of changes?

Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Brant,

My process thus far has been the following:

Do the sync to see what files are changed.
Take a look at the last commit sync'd to what is currently in master for a 
file.
Document all the commits that have come in on that file since.
Repeat process for all the relevant files if there is more than one.
If there are multiple files, I organize the commits with a list of the files 
touched by each commit.
Document the master level of Oslo when the sync was done for reference.

Process may not be perfect, but it gets the job done.  Here is an example 
of the format I use:  https://review.openstack.org/#/c/75740/

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-14 Thread Solly Ross
It would also be great if there was a way to only sync one package.  When 
adding a new library
to a project (e.g. openstack.common.report to Nova), one would want to only 
sync the openstack.common.report
parts, and not the any changes from the rest of openstack.common.  My process 
has been

1. Edit openstack-common.conf to only contain the packages I want
2. Run the update
3. Make sure no code was left referencing 'openstack.common.xyz' instead of 
'nova.openstack.common.xyz' (hint: this happens sometimes)
4. git checkout openstack-common.conf to revert the changes to 
openstack-common.conf

IMHO, update.py needs a bit of work (well, I think the whole code copying thing 
needs a bit of work, but that's a different story).

Best Regards,
Solly Ross

- Original Message -
From: Jay S Bryant jsbry...@us.ibm.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Friday, March 14, 2014 3:36:49 PM
Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py




From: Brant Knudson b...@acm.org 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date: 03/14/2014 02:21 PM 
Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py 







On Fri, Mar 14, 2014 at 2:05 PM, Jay S Bryant  jsbry...@us.ibm.com  wrote: 

It would be great if we could get the process for this automated. In the 
mean time, those of us doing the syncs will just have to slog through the 
process. 

Jay 



___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 


What's the process? How do I generate the list of changes? 

Brant 
___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 


Brant, 

My process thus far has been the following: 


1. Do the sync to see what files are changed. 
2. Take a look at the last commit sync'd to what is currently in master for 
a file. 
3. Document all the commits that have come in on that file since. 
4. Repeat process for all the relevant files if there is more than one. 
5. If there are multiple files, I organize the commits with a list of the files 
touched by each commit. 
6. Document the master level of Oslo when the sync was done for reference. 

Process may not be perfect, but it gets the job done. Here is an example of the 
format I use: https://review.openstack.org/#/c/75740/ 

Jay 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-14 Thread Solly Ross
You're far from the only one who feels this way.  It should be noted, however, 
that it would appear that the Oslo team is
trying to graduate some more of the modules into separate libraries.  I talked 
to someone about the process, and they said
that the Oslo team is trying to graduate the lowest libraries on the 
dependency chart first, so that any graduated libraries
will have oslo.xyz dependencies, and not code-sync dependencies.

Best Regards,
Solly Ross

- Original Message -
From: Jay S Bryant jsbry...@us.ibm.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Friday, March 14, 2014 3:12:22 PM
Subject: Re: [openstack-dev] Duplicate code for processing REST APIs




From: Duncan Thomas duncan.tho...@gmail.com 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date: 03/14/2014 11:56 AM 
Subject: Re: [openstack-dev] Duplicate code for processing REST APIs 




On 13 March 2014 21:13, Roman Podoliaka rpodoly...@mirantis.com wrote: 
 Hi Steven, 
 
 Code from openstack/common/ dir is 'synced' from oslo-incubator. The 
 'sync' is effectively a copy of oslo-incubator subtree into a project 
 source tree. As syncs are not done at the same time, the code of 
 synced modules may indeed by different for each project depending on 
 which commit of oslo-incubator was synced. 


Worth noting that there have been a few cases of projects patching 
OSLO bugs intheir own tree rather than fixing in OSLO then resyncing. 
If anybody has any tooling that can detect that, I'd love to see the 
results. 

I'm generally of the opinion that cinder is likely to be resistant to 
more parts of OSLO being used in cinder unless they are a proper 
library - syncs have caused us significant pain, code churn, review 
load and bugs in the last 12 months. I am but one voice among many, 
but I know I'm not the only member of core who feels this to be the 
case. Hopefully I can spend some time with OSLO core at the summit and 
discuss the problems I've found. 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 


Duncan, 

I will come with you for that discussion. :-) Have some thoughts and questions 
to share as well. 

Regardless, I think we need to make sure to actually get our Oslo syncs started 
for Cinder early 
in Juno. We are way behind on db and db.sqlalchemy. Planning to propose changes 
to that as soon 
as we switch over. 

Jay 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Developer Documentation - Architecture design docs in Google docs

2014-03-14 Thread Collins, Sean
Hi,

I just read through some of the design docs that were linked to the DVR
blueprints - and realized we probably have a ton of Neutron developer
documentation and architecture stuff in Google docs.

Should we try and gather them all up and place links to them in
the Developer Documentation - or perhaps even bring them into the
developer documentation tree (if they are not currently undergoing
discussion and development)?

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Advanced Services Common requirements IRC meeting

2014-03-14 Thread Sumit Naiksatam
Here is a summary from yesterday's meeting:

** Flavors (topic lead: Eugene Nikanorov)
* Hide the provider implementation details from the user
  - The admin API would still need to be able to configure the
provider artifacts
  - Suggestion to leverage existing provider model for this
  - Also suggestion to optionally expose vendor in the tenant-facing API
* It should have generic application to Neutron services
* It should be able to represent different flavors of the same
service (supported by the same or different backend providers)
* Should the tenant facing abstraction support exact match and/or
loose semantics for flavor specifications?
  - This does not necessarily have to be mutually exclusive. We could
converge on a base set of attributes as a part of the generic and
common definition across services. There could be additional
(extended) attributes that can be exposed per backend provider (and
might end up being specific to that deployment)
* Name of this abstraction, we did not discuss this

** Service Insertion/Chaining (topic lead: Sumit Naiksatam)
* Service context
  - general agreement on what is being introduced in:
https://review.openstack.org/#/c/62599
* Service chain
  - Can a flavor capture the definition of a service chain. Current
thinking is yes.
  - If so, we need to discuss more on the feasibility of tenant
created service chains
  - The current approach specified in:
https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
does not preclude this.

** Vendor plugins for L3 services (topic lead: Paul Michali)
* how to handle/locate vendor config files
* way to do vendor validation (e.g. validate, commit, apply ~ to
precommit/postcommit)
* How to tell client what vendor capabilities are
* How to report to plugin status, when there are problems
* I've seen a bunch of these issues with VPN development and imagine
other svcs do to.
* Should we setup a separate IRC to discuss some ideas on this?

** Requirements for group policy framework
* We could not cover this

** Logistics
* The feedback was to continue this meeting on a weekly basis (since
lots more discussions are needed on these and other topics), and
change the day/time to Wednesdays at 1700 UTC on #openstack-meeting-3

Meeting wiki page and logs can be found here:
https://wiki.openstack.org/wiki/Meetings/AdvancedServices

Thanks,
~Sumit.


On Wed, Mar 12, 2014 at 9:20 PM, Sumit Naiksatam
sumitnaiksa...@gmail.com wrote:
 Hi,

 This is a reminder - we will be having this meeting in
 #openstack-meeting-3 on March 13th (Thursday) at 18:00 UTC. The
 proposed agenda is as follows:

 * Flavors/service-type framework
 * Service insertion/chaining
 * Group policy requirements
 * Vendor plugins for L3 services

 We can also decide the time/day/frequency of future meetings.

 Meeting wiki: https://wiki.openstack.org/wiki/Meetings/AdvancedServices

 Thanks,
 ~Sumit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-14 Thread Ben Nemec

On 2014-03-13 11:12, James Slagle wrote:

On Thu, Mar 13, 2014 at 2:51 AM, Robert Collins
robe...@robertcollins.net wrote:

So we already have pretty high requirements - it's basically a 16G
workstation as a minimum.

Specifically to test the full story:
 - a seed VM
 - an undercloud VM (bm deploy infra)
 - 1 overcloud control VM
 - 2 overcloud hypervisor VMs

   5 VMs with 2+G RAM each.

To test the overcloud alone against the seed we save 1 VM, to skip the
overcloud we save 3.

However, as HA matures we're about to add 4 more VMs: we need a HA
control plane for both the under and overclouds:
 - a seed VM
 - 3 undercloud VMs (HA bm deploy infra)
 - 3 overcloud control VMs (HA)
 - 2 overcloud hypervisor VMs

   9 VMs with 2+G RAM each == 18GB

What should we do about this?

A few thoughts to kick start discussion:
 - use Ironic to test across multiple machines (involves tunnelling
brbm across machines, fairly easy)
 - shrink the VM sizes (causes thrashing)
 - tell folk to toughen up and get bigger machines (ahahahahaha, no)
 - make the default configuration inline the hypervisors on the
overcloud with the control plane:
   - a seed VM
   - 3 undercloud VMs (HA bm deploy infra)
   - 3 overcloud all-in-one VMs (HA)
  
 7 VMs with 2+G RAM each == 14GB


I think its important that we exercise features like HA and live
migration regularly by developers, so I'm quite keen to have a fairly
solid systematic answer that will let us catch things like bad
firewall rules on the control node preventing network tunnelling
etc... e.g. we benefit the more things are split out like scale
deployments are. OTOH testing the micro-cloud that folk may start with
is also a really good idea



The idea I was thinking was to make a testenv host available to
tripleo atc's. Or, perhaps make it a bit more locked down and only
available to a new group of tripleo folk, existing somewhere between
the privileges of tripleo atc's and tripleo-cd-admins.  We could
document how you use the cloud (Red Hat's or HP's) rack to start up a
instance to run devtest on one of the compute hosts, request and lock
yourself a testenv environment on one of the testenv hosts, etc.
Basically, how our CI works. Although I think we'd want different
testenv hosts for development vs what runs the CI, and would need to
make sure everything was locked down appropriately security-wise.

Some other ideas:

- Allow an option to get rid of the seed VM, or make it so that you
can shut it down after the Undercloud is up. This only really gets rid
of 1 VM though, so it doesn't buy you much nor solve any long term
problem.

- Make it easier to see how you'd use virsh against any libvirt host
you might have lying around.  We already have the setting exposed, but
make it a bit more public and call it out more in the docs. I've
actually never tried it myself, but have been meaning to.

- I'm really reaching now, and this may be entirely unrealistic :),
but somehow use the fake baremetal driver and expose a mechanism to
let the developer specify the already setup undercloud/overcloud
environment ahead of time.
For example:
* Build your undercloud images with the vm element since you won't be
PXE booting it
* Upload your images to a public cloud, and boot instances for them.
* Use this new mechanism when you run devtest (presumably running from
another instance in the same cloud)  to say I'm using the fake
baremetal driver, and here are the  IP's of the undercloud instances.
* Repeat steps for the overcloud (e.g., configure undercloud to use
fake baremetal driver, etc).
* Maybe it's not the fake baremetal driver, and instead a new driver
that is a noop for the pxe stuff, and the power_on implementation
powers on the cloud instances.
* Obviously if your aim is to test the pxe and disk deploy process
itself, this wouldn't work for you.
* Presumably said public cloud is OpenStack, so we've also achieved
another layer of On OpenStack.


I actually spent quite a while looking into something like this last 
option when I first started on TripleO, because I had only one big 
server locally and it was running my OpenStack installation.  I was 
hoping to use it for my TripleO instances, and even went so far as to 
add support for OpenStack to the virtual power driver in baremetal.  I 
was never completely successful, but I did work through a number of 
problems:


1. Neutron didn't like allowing the DHCP/PXE traffic to let my seed 
serve to the undercloud.  I was able to get around this by using flat 
networking with a local bridge on the OpenStack system, but I'm not sure 
if that's going to be possible on most public cloud providers.  There 
may very well be a less invasive way to configure Neutron to allow that, 
but I don't know how to do it.


2. Last time I checked, Nova doesn't support PXE booting instances so I 
had to use iPXE images to do the booting.  This doesn't work since we 
PXE boot every time an instance reboots and the iPXE image gets 

[openstack-dev] [qa][neutron] neutron version in openstack/nova neutron tests

2014-03-14 Thread Darragh O'Reilly
I'm looking at errors in the dhcp-agent using logstash and I see openstack/nova 
Jenkins jobs running today [1] that have an outdated 
neutron/openstack/common/lockutils.py. The message format used to be 1 line per 
lock/unlock eg:

Got semaphore dhcp-agent for method subnet_delete_end... 

But since review 47557 merged on 2014-01-13 [2], it should be 3 lines per 
lock/unlock, like:

Got semaphore dhcp-agent lock 
Got semaphore / lock sync_state
Semaphore / lock released sync_state

And these do show up in openstack/neutron jobs. I would have thought openstack/nova 
jobs would be using the latest or a very recent neutron?

[1] 
http://logs.openstack.org/25/75825/1/gate/gate-tempest-dsvm-neutron/71b9183/logs/screen-q-dhcp.txt.gz#_2014-03-14_00_23_35_716
[2] 
https://review.openstack.org/#/c/47557/20/neutron/openstack/common/lockutils.py
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Savanna 2014.1.b3 (Icehouse-3) dev milestone available

2014-03-14 Thread Matthew Farrellee

On 03/06/2014 04:00 PM, Sergey Lukjanov wrote:

Hi folks,

the third development milestone of Icehouse cycle is now available for Savanna.

Here is a list of new features and fixed bug:

https://launchpad.net/savanna/+milestone/icehouse-3

and here you can find tarballs to download it:

http://tarballs.openstack.org/savanna/savanna-2014.1.b3.tar.gz
http://tarballs.openstack.org/savanna-dashboard/savanna-dashboard-2014.1.b3.tar.gz
http://tarballs.openstack.org/savanna-image-elements/savanna-image-elements-2014.1.b3.tar.gz
http://tarballs.openstack.org/savanna-extra/savanna-extra-2014.1.b3.tar.gz

There are 20 blueprint implemented, 45 bugs fixed during the
milestone. It includes savanna, savanna-dashboard,
savanna-image-element and savanna-extra sub-projects. In addition
python-savannaclient 0.5.0 that was released early this week supports
all new features introduced in this savanna release.

Thanks.



rdo packages -

f21 - savanna - http://koji.fedoraproject.org/koji/taskinfo?taskID=6634141
el6 - savanna - http://koji.fedoraproject.org/koji/taskinfo?taskID=6634119

f21 - python-django-savanna - 
http://koji.fedoraproject.org/koji/taskinfo?taskID=6634139
el6 - python-django-savanna - 
http://koji.fedoraproject.org/koji/taskinfo?taskID=6634116


best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-14 Thread Ben Nemec

On 2014-03-14 14:49, Solly Ross wrote:

It would also be great if there was a way to only sync one package.


There is. :-)

--nodeps will only sync the modules specified on the command line: 
https://wiki.openstack.org/wiki/Oslo#Syncing_Code_from_Incubator


That said, it's not always safe to do that.  You might sync a change in 
one module that depends on a change in another module and end up 
breaking something.  It might not be caught in the sync either because 
the Oslo unit tests don't get synced across.



When adding a new library
to a project (e.g. openstack.common.report to Nova), one would want to
only sync the openstack.common.report
parts, and not the any changes from the rest of openstack.common.  My
process has been

1. Edit openstack-common.conf to only contain the packages I want
2. Run the update
3. Make sure no code was left referencing 'openstack.common.xyz' instead of
'nova.openstack.common.xyz' (hint: this happens sometimes)
4. git checkout openstack-common.conf to revert the changes to
openstack-common.conf

IMHO, update.py needs a bit of work (well, I think the whole code
copying thing needs a bit of work, but that's a different story).

Best Regards,
Solly Ross

- Original Message -
From: Jay S Bryant jsbry...@us.ibm.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Friday, March 14, 2014 3:36:49 PM
Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py




From: Brant Knudson b...@acm.org
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date: 03/14/2014 02:21 PM
Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py







On Fri, Mar 14, 2014 at 2:05 PM, Jay S Bryant  jsbry...@us.ibm.com  
wrote:


It would be great if we could get the process for this automated. In 
the
mean time, those of us doing the syncs will just have to slog through 
the

process.

Jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


What's the process? How do I generate the list of changes?

Brant


Brant,

My process thus far has been the following:


1. Do the sync to see what files are changed.
2. Take a look at the last commit sync'd for each file versus what is
currently in master.
3. Document all the commits that have come in on that file since (see the
sketch below).
4. Repeat the process for all the relevant files if there is more than
one.
5. If there are multiple files, I organize the commits with a list of
the files touched by each commit.
6. Document the master level of Oslo when the sync was done, for
reference.


The process may not be perfect, but it gets the job done. Here is an
example of the format I use: https://review.openstack.org/#/c/75740/
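
A rough sketch of steps 2-3 above, assuming it is run from an oslo-incubator
checkout; the commit id and file list are placeholders, and this is only
meant to illustrate the bookkeeping, not to be official tooling:

    #!/usr/bin/env python
    # For each synced module, list the oslo-incubator commits that touched it
    # since the last commit already present in the consuming project.
    import subprocess

    LAST_SYNCED_COMMIT = 'deadbeef'  # hypothetical: last commit already synced
    SYNCED_FILES = ['openstack/common/lockutils.py']  # hypothetical file list

    for path in SYNCED_FILES:
        log = subprocess.check_output(
            ['git', 'log', '--oneline',
             '%s..HEAD' % LAST_SYNCED_COMMIT, '--', path])
        print('Commits touching %s since %s:' % (path, LAST_SYNCED_COMMIT))
        print(log)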

Jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] does exception need localize or not?

2014-03-14 Thread Doug Hellmann
On Thu, Mar 13, 2014 at 6:44 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

   From: Doug Hellmann doug.hellm...@dreamhost.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, March 13, 2014 at 12:44 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] does exception need localize or not?




 On Thu, Feb 27, 2014 at 3:45 AM, yongli he yongli...@intel.com wrote:

 Refer to:
 https://wiki.openstack.org/wiki/Translations

 Right now some exceptions use _() and some do not.  The wiki suggests not
 doing that, but I'm not sure.

 What's the correct way?


 F.Y.I

 What To Translate

 At present the convention is to translate *all* user-facing strings.
 This means API messages, CLI responses, documentation, help text, etc.

 There has been a lack of consensus about the translation of log messages;
 the current ruling is that while it is not against policy to mark log
 messages for translation if your project feels strongly about it,
 translating log messages is not actively encouraged.


  I've updated the wiki to replace that paragraph with a pointer to
 https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation which
 explains the log translation rules. We will be adding the job needed to
 have different log translations during Juno.



  Exception text should *not* be marked for translation, because if an
 exception occurs there is no guarantee that the translation machinery will
 be functional.


  This makes no sense to me. Exceptions should be translated. By far the
 largest number of errors will be presented to users through the API or
 through Horizon (which gets them from the API). We will ensure that the
 translation code does its best to fall back to the original string if the
 translation fails.

  Doug
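
A minimal sketch of this convention (translate user-facing exception
messages), assuming an oslo-incubator style gettextutils module in the
project tree; the import path and the exception itself are illustrative
only:

    from nova.openstack.common.gettextutils import _


    class VolumeNotFound(Exception):
        """Hypothetical user-facing exception with a translatable message."""

        def __init__(self, volume_id):
            # The message is marked with _() so it can be translated before
            # it is returned through the API; if translation fails, the
            # original English string is used as-is.
            message = _("Volume %s could not be found.") % volume_id
            super(VolumeNotFound, self).__init__(message)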





 Regards
 Yongli He





  I think this question comes up every 3 months, haha ;)

  As we continue to expand all the libraries in
 https://github.com/openstack/requirements/blob/master/global-requirements.txt
 and knowing that those libraries likely don't translate their exceptions
 (probably in the majority of cases, especially in non-openstack/oslo 3rd
 party libraries), are we chasing an ideal that cannot be caught?

  Food for thought,


We can't control what the other projects do, but that doesn't prevent us
from doing more.

Doug




  -Josh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-14 Thread Doug Hellmann
On Fri, Mar 14, 2014 at 12:48 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 On 13 March 2014 21:13, Roman Podoliaka rpodoly...@mirantis.com wrote:
  Hi Steven,
 
  Code from openstack/common/ dir is 'synced' from oslo-incubator. The
  'sync' is effectively a copy of oslo-incubator subtree into a project
  source tree. As syncs are not done at the same time, the code of
  synced modules may indeed be different for each project depending on
  which commit of oslo-incubator was synced.


 Worth noting that there have been a few cases of projects patching
 OSLO bugs in their own tree rather than fixing in OSLO then resyncing.
 If anybody has any tooling that can detect that, I'd love to see the
 results.
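
As an illustrative sketch only (not existing tooling), one way to spot
locally patched copies is to compare a project's synced tree against an
oslo-incubator checkout after undoing the import-path rewrite; note that a
difference may also just mean the incubator has moved on since the last
sync. Paths and the project name are assumptions:

    #!/usr/bin/env python
    # Compare a project's synced copy of the incubator code against a local
    # oslo-incubator checkout to spot locally patched modules.
    import os

    PROJECT = 'cinder'                              # hypothetical project
    INCUBATOR = 'oslo-incubator/openstack/common'   # hypothetical checkout
    PROJECT_COPY = os.path.join(PROJECT, 'openstack', 'common')


    def normalized(path):
        # Undo the '<project>.openstack.common' rewrite done by update.py so
        # that only real differences show up.
        with open(path) as source:
            return source.read().replace(PROJECT + '.openstack.common',
                                         'openstack.common')


    for root, _dirs, files in os.walk(PROJECT_COPY):
        for name in files:
            if not name.endswith('.py'):
                continue
            local = os.path.join(root, name)
            rel = os.path.relpath(local, PROJECT_COPY)
            upstream = os.path.join(INCUBATOR, rel)
            if not os.path.exists(upstream):
                print('only in project copy: %s' % rel)
            elif normalized(local) != open(upstream).read():
                print('differs from incubator: %s' % rel)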

 I'm generally of the opinion that cinder is likely to be resistant to
 more parts of OSLO being used in cinder unless they are a proper
 library - syncs have caused us significant pain, code churn, review
 load and bugs in the last 12 months. I am but one voice among many,
 but I know I'm not the only member of core who feels this to be the
 case. Hopefully I can spend some time with OSLO core at the summit and
 discuss the problems I've found.


We will be working on releasing code from the incubator into libraries
during Juno. Projects that want their snowflake fixes included in those
libraries should submit them to the incubator ASAP.

Doug





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-14 Thread Tim Bell

I think we need to split the scenarios and focus on the end-user experience
with the cloud.

A few come to mind from the CERN experience (but this may not be all of them):

1. Accidental deletion of an object (including meta data)
2. Multi-level consistency (such as between Cell API and child instances)
3. Auditing

CERN sees scenario 1 at a reasonable frequency. Ultimately, it is due to
error by:
--
A - the openstack administrators themselves
B - the delegated project administrators
C - users with a non-optimised scope for administrative action
D - users who make mistakes

It seems that we should handle these as different cases:

3 - make sure there is a log entry (ideally off the box) for all operations
2 - up to the component implementers, but with the aim to expire deleted entries
as soon as reasonable consistency is achieved
1[A-D] - how can we recover from operator/project admin/user error?

I understand that there are differing perspectives, from cloud to server
consolidation, but my cloud users expect that if they create a one-off virtual
desktop running Windows for software testing and install a set of software on
it, I don't tell them it was accidentally deleted due to operator error (1A or
1B) and that they need to re-create it.

Tim

 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 14 March 2014 16:55
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
 deletion (step by step)
 
 On Fri, 2014-03-14 at 08:37 +0100, Radomir Dopieralski wrote:
  Hello,
 
  I also think that this thread is going in the wrong direction, but I
  don't think the direction Boris wants is the correct one either.
  Frankly I'm a little surprised that nobody mentioned another advantage
  that soft delete gives us, the one that I think it was actually used for 
  originally.
 
  You see, soft delete is an optimization. It's there to make the system
  work faster as a whole, have less code and be simpler to maintain and debug.
 
  How does it do it, when, as clearly shown in the first post in this
  thread, it makes the queries slower, requires additional indices in
  the database and more logic in the queries?
 
 I feel it isn't an optimization if:
 
 * It slows down the code base
 * Makes the code harder to read and understand
 * Deliberately obscures the actions of removing and restoring resources
 * Encourages the idea that everything in the system is undoable, like the 
 cloud is a Word doc.
 
   The answer is, by doing more
  with those queries, by making you write less code, execute fewer
  queries to the databases and avoid duplicating the same data in multiple 
  places.
 
 Fewer queries does not always make faster code, nor does it lead to 
 inherently race-free code.
 
  OpenStack is a big, distributed system of multiple databases that
  sometimes rely on each other and cross-reference their records. It's
  not uncommon to have some long-running operation started, that uses
  some data, and then, in the middle of its execution, have that data deleted.
  With soft delete, that's not a problem -- the operation can continue
  safely and proceed as scheduled, with the data it was started with in
  the first place -- it still has access to the deleted records as if
  nothing happened.
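
A minimal sketch of the pattern being described here (not Nova's or Cinder's
actual models): rows carry a deleted flag, normal lookups filter it out, and
a task that started before the delete can still read the record it began
with.

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class Volume(Base):
        __tablename__ = 'volumes'
        id = sa.Column(sa.Integer, primary_key=True)
        name = sa.Column(sa.String(255))
        deleted = sa.Column(sa.Boolean, default=False)


    def volume_get(session, volume_id, read_deleted=False):
        # Normal callers hide soft-deleted rows; a long-running task that
        # started before the delete can pass read_deleted=True and keep
        # working with the old record.
        query = session.query(Volume).filter_by(id=volume_id)
        if not read_deleted:
            query = query.filter_by(deleted=False)
        return query.first()


    def volume_soft_delete(session, volume_id):
        # 'Deleting' just flips the flag; the row and its data stay around.
        session.query(Volume).filter_by(id=volume_id).update({'deleted': True})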
 
 I believe a better solution would be to use Boris' solution and implement 
 safeguards around the delete operation. For instance, not
 being able to delete an instance that has tasks still running against it. 
 Either that, or implement true task abortion logic that can
 notify distributed components about the need to stop a running task because 
 either the user wants to delete a resource or simply
 cancel the operation they began.
 
   You simply won't be able to schedule another operation like that with
  the same data, because it has been soft-deleted and won't pass the
  validation at the beginning (or even won't appear in the UI or CLI).
  This solves a lot of race conditions, error handling, additional
  checks to make sure the record still exists, etc.
 
 Sorry, I disagree here. Components that rely on the soft-delete behavior to 
 get the resource data from the database should instead
 respond to a NotFound that gets raised by aborting their running task.
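
A sketch of the alternative argued for here, reusing the Volume model from
the previous sketch; the exception name and the shape of the task are
illustrative assumptions:

    class VolumeNotFound(Exception):
        pass


    def continue_long_running_task(session, volume_id):
        volume = session.query(Volume).filter_by(id=volume_id).first()
        if volume is None:
            # Explicit code path: the resource really was deleted
            # mid-operation, so abort the task (and let the caller clean up)
            # rather than pretend the record still exists.
            raise VolumeNotFound('volume %s was deleted' % volume_id)
        # ...otherwise carry on with the next step of the operation...
        return volume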
 
  Without soft delete, you need to write custom code every time to
  handle the case of a record being deleted mid-operation, including all
  the possible combinations of which record and when.
 
 Not custom code. Explicit code paths for explicit actions.
 
   Or you need to copy all
  the relevant data in advance over to whatever is executing that
  operation.
 
 This is already happening.
 
  This cannot be abstracted away entirely (although tools like TaskFlow
  help), as this is specific to the case you are handling. And it's not
  easy to find all the places where you can have a race condition like
  that -- especially when you are modifying existing code that has been
  

Re: [openstack-dev] [qa][neutron] neutron version in openstack/nova neutron tests

2014-03-14 Thread Darragh O'Reilly


ah - forget it - it was stable/havana of course


https://review.openstack.org/#/c/75825/


On Friday, 14 March 2014, 20:29, Darragh O'Reilly 
dara2002-openst...@yahoo.com wrote:
 
I'm looking at errors in the dhcp-agent using logstash and I see openstack/nova 
Jenkins jobs running today [1] that have an outdated 
neutron/openstack/common/lockutils.py. The message format used to be 1 line per 
lock/unlock, e.g.:

Got semaphore dhcp-agent for method subnet_delete_end... 

But since review 47557 merged on 2014-01-13 [2] it should be 3 lines per
lock/unlock, like:

Got semaphore dhcp-agent lock 
Got semaphore / lock sync_state
Semaphore / lock released sync_state

And these do show up in openstack/neutron jobs. I would have thought
openstack/nova jobs would be using the latest or a very recent neutron?

[1]
 
http://logs.openstack.org/25/75825/1/gate/gate-tempest-dsvm-neutron/71b9183/logs/screen-q-dhcp.txt.gz#_2014-03-14_00_23_35_716
[2] 
https://review.openstack.org/#/c/47557/20/neutron/openstack/common/lockutils.py


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stable/havana and trunk clients

2014-03-14 Thread Joe Gordon
Hi All, so as some of you may have noticed the stable/havana jobs just
broke.

The stable/havana jobs are missing a dependency on oauthlib [1]. It looks
like oauthlib is a keystoneclient dependency, but it wasn't in the
global-requirements for stable/havana [2], so it never got installed.
Currently if a dependency is not in the global-requirements file it just
gets dropped during an install instead of erroring out [3].
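
A purely illustrative sketch of the behaviour described above (not the
actual requirements tooling): when a project's requirements are reconciled
against the global list, anything missing from global-requirements is
silently dropped instead of raising an error.

    def reconcile_requirements(project_reqs, global_reqs):
        # project_reqs / global_reqs: lists of requirement names.
        allowed = set(global_reqs)
        kept, dropped = [], []
        for req in project_reqs:
            (kept if req in allowed else dropped).append(req)
        # The fix proposed in [3] is, roughly, to error out when 'dropped' is
        # non-empty rather than silently ignoring it.
        return kept, dropped


    # oauthlib was a keystoneclient dependency but was not in the
    # stable/havana global-requirements, so it never got installed.
    print(reconcile_requirements(['oauthlib', 'six'], ['six']))
    # -> (['six'], ['oauthlib'])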


For stable jobs we use stable servers and trunk clients because all clients
are supposed to be backwards compatible. But we don't gate client changes
on the stable server branches, opening up a hole in our gate to wedge the
stable branches. To fix this I propose gating all clients on the stable
jobs [4]. Doing so would close this hole, but also force us to do a better
job of not letting stable branches break.


Thoughts?

best,
Joe

[1]
http://logs.openstack.org/35/80435/1/check/check-tempest-dsvm-full/269edcf/logs/screen-h-eng.txt.gz
[2] Fix: https://review.openstack.org/#/c/80687/
[3] Fix: https://review.openstack.org/#/c/80690/
[4] https://review.openstack.org/#/c/80698/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][policy] Integrating network policies and network services

2014-03-14 Thread Mohammad Banikazemi

We have started looking at how the Neutron advanced services being defined
and developed right now can be used within the Neutron policy framework we
are building. Furthermore, we have been looking at a new model for the
policy framework as of the past couple of weeks. So, I have been trying to
see how the services will fit in (or can be utilized by) the policy work in
general and with the new contract-based model we are considering in
particular. Some of the I like to discuss here are specific to the use of
service chains with the group policy work but some are generic and related
to service chaining itself.

If I understand it correctly, the proposed service chaining model requires
the creation of the services in the chain without specifying their
insertion contexts. Then, the service chain is created by specifying the
services in the chain, a particular provider (which is specific to the
chain being built) and possibly source and destination insertion contexts.
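
Purely as an illustration of that ordering (this is not the proposed Neutron
API; all names below are assumptions):

    # 1. Services are created first, with no insertion context attached.
    firewall = {'type': 'firewall', 'provider': 'fw-provider-x'}
    loadbalancer = {'type': 'loadbalancer', 'provider': 'lb-provider-y'}

    # 2. The chain is created afterwards, naming its services, a provider
    #    specific to this chain, and (optionally) insertion contexts.
    service_chain = {
        'services': [firewall, loadbalancer],
        'provider': 'chain-provider-z',
        'source_context': None,       # may be left implicit
        'destination_context': None,  # may be left implicit
    }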

1- This fits OK with the policy model we had developed earlier, where the
policy would get defined between a source and a destination policy endpoint
group. The chain could be instantiated at the time the policy gets defined.
(More questions on the instantiation below, marked as 1.a and 1.b.) How
would that work in a contract-based model for policy? At the time a
contract is defined, its producers and consumers are not defined yet.
Would we postpone the instantiation of the service chain to the time a
contract gets a producer and at least one consumer?

1.a- It seems to me it would be helpful, if not necessary, to be able to
define a chain without instantiating it. If I understand it
correctly, in the current service chaining model, when the chain is
created, the source/destination contexts are used (whether they are
specified explicitly or implicitly) and the chain of services becomes
operational. We may want to be able to define the chain and postpone its
creation to a later point in time.

1.b- Is it really possible to stand up a service without knowing its
insertion context (explicitly or implicitly defined) in all cases?
For certain cases this will be OK, but for others, depending on the
insertion context or other factors such as the requirements of other
services in the chain, we may need to, for example, instantiate the service
(e.g. create a VM) at a specific location that is not known when the
service is created. If that is the case, would it make sense not to
instantiate the services of a chain at any level (rather than instantiating
them and marking them as not operational or not routing traffic to them)
before the chain is created? (This leads to question 3 below.)

2- With one producer and multiple consumers, do we instantiate a chain
(meaning the chain and the services in the chain become operational) for
each consumer? If not, how do we deal with using the same
source/destination insertion context pair for the provider and all of the
consumers?

3- For the service chain creation, I am sure there are good reasons for
requiring a specific provider for a given chain of services, but wouldn't it
be possible to have a generic chain provider which would instantiate each
service in the chain using the required provider for each service (e.g.,
firewall or loadbalancer service) and set the insertion contexts
for each service such that the chain gets constructed as well? I am sure I
am ignoring some practical requirements, but is it worth rethinking the
current approach?

Best,

Mohammad
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-14 Thread Dan Smith
 Just to answer this point, despite the review latency, please don't be
 tempted to think one big change will get in quicker than a series of
 little, easy to review, changes. All changes are not equal. A large
 change often scares me away to easier-to-review patches.
 
 Seems like, for Juno-1, it would be worth cancelling all non-urgent
 bug fixes, and doing the refactoring we need.
 
 I think the aim here should be better (and easier to understand) unit
 test coverage. That's a great way to drive good code structure.

Review latency will be directly affected by how well the refactoring
changes are staged. If they are small, on-topic and easy to validate,
they will go quickly. They should be linearized unless there are some
places where multiple sequences of changes make sense (i.e. refactoring
a single file that results in no changes required to others).

As John says, if it's just a big change-everything patch, or a ton of
smaller ones that don't fit a plan or process, then it will be slow and
painful (for everyone).

 +1 sounds like a good first step is to move to oslo.vmware

I'm not sure whether I think that refactoring spawn would be better done
first or second. My gut tells me that doing spawn first would mean that
we could more easily validate the oslo refactors because (a) spawn is
impossible to follow right now and (b) refactoring it to smaller methods
should be fairly easy. The tests for spawn are equally hard to follow
and refactoring it first would yield a bunch of more unit-y tests that
would help us follow the oslo refactoring.

However, it sounds like the oslo-ification has maybe already started and
that refactoring spawn will have to take a backseat to that.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Manual scheduling nodes in maintenance mode

2014-03-14 Thread Devananda van der Veen
+1 to the idea.

However, I think we should discuss whether the rescue interface is the
appropriate path. Its initial intention was to tie into Nova's rescue
interface, allowing a user whose instance is non-responsive to boot into a
recovery mode and access the data stored within their instance. I think
there are two different use-cases here:

Case A: a user of Nova who somehow breaks their instance, and wants to boot
into a rescue or recovery mode, preserving instance data. This is
useful if, e.g., they lost network access or broke their grub config.

Case B: an operator of the baremetal cloud whose hardware may be
malfunctioning, who wishes to hide that hardware from users of Case A while
they diagnose and fix the underlying problem.

As I see it, Nova's rescue API (and by extension, the same API in Ironic)
is intended for A, but not for B.  TripleO's use case includes both of
these, and may be conflating them.

I believe Case A is addressed by the planned driver.rescue interface. As
for Case B, I think the solution is to use different tenants and move the
node between them. This is a more complex problem -- Ironic does not model
tenants, and AFAIK Nova doesn't reserve unallocated compute resources on a
per-tenant basis.

That said, I think we will need a way to indicate this bare metal node
belongs to that tenant, regardless of the rescue use case.

-Deva



On Fri, Mar 14, 2014 at 5:01 AM, Lucas Alvares Gomes
 lucasago...@gmail.com wrote:

 On Wed, Mar 12, 2014 at 8:07 PM, Chris Jones c...@tenshu.net wrote:


 Hey

 I wanted to throw out an idea that came to me while I was working on
 diagnosing some hardware issues in the Tripleo CD rack at the sprint last
 week.

 Specifically, if a particular node has been dropped from automatic
 scheduling by the operator, I think it would be super useful to be able to
 still manually schedule the node. Examples might be that someone is
 diagnosing a hardware issue and wants to boot an image that has all their
 favourite diagnostic tools in it, or they might be booting an image they
 use for updating firmwares, etc (frankly, just being able to boot a
 generic, unmodified host OS on a node can be super useful if you're trying
 to crash cart the machine for something hardware related).

 Any thoughts? :)


 +1 I like the idea and think it's quite useful.

 Drivers in Ironic already expose a rescue interface [1] (which I don't
 think we have put much thought into yet); perhaps the PXE driver might
 implement something similar to what you want to do here?

 [1]
 https://github.com/openstack/ironic/blob/master/ironic/drivers/base.py#L60



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stable/havana and trunk clients

2014-03-14 Thread Joe Gordon
On Fri, Mar 14, 2014 at 2:50 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All, so as some of you may have noticed the stable/havana jobs just
 broke.


The python-keystoneclient patch https://review.openstack.org/#/c/77977/ broke
stable.

https://review.openstack.org/#/c/80726/ reverts that change.



 The stable/havana jobs are missing a dependency on oauthlib [1]. It looks
 like oauthlib is a keystoneclient dependency, but it wasn't in the
 global-requirements for stable/havana [2], so it never got installed.
 Currently if a dependency is not in the global-requirements file it just
 gets dropped during an install instead of erroring out [3].


 For stable jobs we use stable servers and trunk clients because all
 clients are supposed to be backwards compatible. But we don't gate client
 changes on the stable server branches, opening up a hole in our gate to
 wedge the stable branches. To fix this I propose gating all clients on the
 stable jobs [4]. Doing so would close this hole, but also force us to do a
 better job of not letting stable branches break.


 Thoughts?

 best,
 Joe

 [1]
 http://logs.openstack.org/35/80435/1/check/check-tempest-dsvm-full/269edcf/logs/screen-h-eng.txt.gz
 [2] Fix: https://review.openstack.org/#/c/80687/
 [3] Fix: https://review.openstack.org/#/c/80690/
 [4] https://review.openstack.org/#/c/80698/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Moving swift3 to stackforge (was: Re: [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka)

2014-03-14 Thread Pete Zaitcev
On Fri, 14 Mar 2014 09:03:22 +0100
Chmouel Boudjnah chmo...@enovance.com wrote:

 fujita (the maintainer of swift3, in CC on this email) has commented that he's
 been working on it.

I think we shouldn't have kicked it out. Maybe just fold it
back into Swift?

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev