Re: [openstack-dev] [Is there horizon implementation for ironic]

2014-05-20 Thread 严超
I see.
Thank you!

Best Regards!
Chao Yan
--
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--


2014-05-19 22:24 GMT+08:00 Devananda van der Veen devananda@gmail.com:

 On Mon, May 19, 2014 at 12:58 AM, 严超 yanchao...@gmail.com wrote:
 
  Hi, All:
  Ironic is a project for us to control bare metal better. Is
 there any horizon implementation for ironic to use the ironic api and
 functions easily?
 
  Best Regards!
  Chao Yan

 The Tuskar UI team is working on a UI for Ironic as well. I met with
 Jaromir to go over a draft of their design last week, but as far as I know,
 there's no usable code / horizon panel just yet.

 -Devananda





[openstack-dev] [TripleO][DiskImage-builder] How to set up network during the deploy

2014-05-20 Thread Tan, Lin
Hi,

I have been working on setting up baremetal for a while. I notice that in the 
process of deployment in TripleO, I mean in the first round of PXE boot, it 
appends the IP info to the configuration and passes it to the target machine 
as kernel parameters. The init script then reads the parameters, configures a 
network interface like eth0, and brings it up. Why does it have to be done 
this way?
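For context, the init-side parsing amounts to reading /proc/cmdline and
splitting out the ip= argument. A minimal sketch in Python, assuming the
standard kernel ip= format (illustrative only, not the actual TripleO init
script):

    # Minimal sketch, assuming the standard kernel argument format:
    # ip=<client>:<server>:<gateway>:<netmask>:<hostname>:<device>:<autoconf>
    def parse_ip_from_cmdline(path='/proc/cmdline'):
        with open(path) as f:
            args = f.read().split()
        for arg in args:
            if arg.startswith('ip='):
                fields = arg[3:].split(':')
                fields += [''] * (7 - len(fields))  # pad missing fields
                client, _srv, gateway, netmask, _host, device, _auto = fields[:7]
                return {'address': client, 'gateway': gateway,
                        'netmask': netmask, 'device': device or 'eth0'}
        return None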
In my case, I want to avoid passing the IP info as kernel parameters, 
but I still need to set up the network on the target machine. I have two 
ideas now, but I am not sure about them:

1.   Get the IP info from the PXE client on the target machine

2.   Add some new elements to the deploy_ramdisk in order to request the IP 
from DHCP again.

My question is which way is more reasonable?

Thanks in advance

Best Regards,

Tan



[openstack-dev] [Neutron] Unifying DB schema

2014-05-20 Thread Anna Kamyshnikova
Hello everyone!

The topic of unconditional migrations was discussed earlier in emails by me
and Salvatore. At the summit there was a small meeting where this topic and
some others were discussed. I didn't participate in that meeting, but Eugene,
a member of my team, was there instead of me and told me what was decided.

The idea is to create some methods that will check the current table state:
 - whether the table exists or not
 - whether all necessary changes have been made or not
(all changes are checked from the very beginning up to Icehouse). A rough
sketch of such a check follows.
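For illustration, such a check could be built on SQLAlchemy's inspector; the
helper below is hypothetical, not the actual proposed methods:

    from sqlalchemy import create_engine, inspect

    def table_in_expected_state(engine, table_name, expected_columns):
        inspector = inspect(engine)
        if table_name not in inspector.get_table_names():
            return False  # the table does not exist at all
        actual = {col['name'] for col in inspector.get_columns(table_name)}
        return expected_columns <= actual  # all expected changes applied?

    engine = create_engine('sqlite://')  # stand-in for the real database URL
    print(table_in_expected_state(engine, 'ports', {'id', 'network_id'}))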

I was inspired by this idea and made some notes about it. They are available
here:
https://docs.google.com/document/d/10p6JKIQf_rymBuNeOywjHiv53cRTfz5kBg8LeUCeykI/edit?usp=sharing
and there is a test change that shows how this is going to work:
https://review.openstack.org/93690.

Henry Gessau is organizing all this work. He is now working on a blueprint on
this topic.

I look forward to any comments about my notes and my test changes.

Regards,
Ann


Re: [openstack-dev] [nova][neutron][mysql] IMPORTANT: MySQL Galera does *not* support SELECT ... FOR UPDATE

2014-05-20 Thread Julien Danjou
On Mon, May 19 2014, Jay Pipes wrote:

 I think at that point I mentioned that there were a number of places that
 were using the SELECT ... FOR UPDATE construct in Nova (in SQLAlchemy, it's
 the with_lockmode('update') modification of the query object). Peter
 promptly said that was a problem. MySQL Galera does not support SELECT ...
 FOR UPDATE, since it has no concept of cross-node locking of records and
 results are non-deterministic.
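For readers who haven't seen the construct, a self-contained sketch of what
with_lockmode('update') looks like in SQLAlchemy; the Quota model here is
invented for illustration, not Nova's actual schema:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Quota(Base):
        __tablename__ = 'quotas'
        id = Column(Integer, primary_key=True)
        project_id = Column(String(64))
        hard_limit = Column(Integer)

    engine = create_engine('sqlite://')  # stand-in for the real database
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # with_lockmode('update') makes SQLAlchemy emit SELECT ... FOR UPDATE.
    # On a single InnoDB node this blocks concurrent writers to the row;
    # on Galera the lock is not replicated across nodes.
    row = (session.query(Quota)
           .filter_by(project_id='demo')
           .with_lockmode('update')
           .first())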

So you send a command that's not supported and the whole software
deadlocks? Is there a bug number about that or something? I cannot
understand how this can be possible and considered normal
(that's the feeling I get reading your mail; I may be wrong).

 We have a number of options:

 1) Stop using MySQL Galera for databases of projects that contain
 with_lockmode('update')

 2) Put a big old warning in the docs somewhere about the problem of
 potential deadlocks or odd behaviour with Galera in these projects

 3) For Nova and Neutron, remove the use of with_lockmode('update') and
 instead use a coarse-grained file lock or a distributed lock manager for
 those areas where we need deterministic reads or quiescence.

 4) For the Nova db quota driver, refactor the driver to either use a
 non-locking method for reservation and quota queries or move the driver out
 into its own projects (or use something like Climate and make sure that
 Climate uses a non-blocking algorithm for those queries...)

 Thoughts?

5) Stop leveling down our development; instead, rely on and leverage a
powerful RDBMS that provides interesting features, such as PostgreSQL.

Sorry, had to say it, but it's pissing me off to see the low quality of
the work that is done around SQL in OpenStack.

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




Re: [openstack-dev] [nova] ANNOUNCE: New Nova Libvirt (Sub-)Team + Meeting

2014-05-20 Thread Daniel P. Berrange
Reminder that the first meeting will kick off today at 1500 UTC on
#openstack-meeting-3. Please add agenda items to the etherpad if you
wish to discuss them this week:

  https://etherpad.openstack.org/p/nova-libvirt-meeting-agenda

Regards,
Daniel

On Fri, May 16, 2014 at 01:10:40PM -0400, Daniel P. Berrange wrote:
 Hi Nova developers,
 
 Since Nova already has sub-teams for HyperV, VMWare, and XenAPI, I feel that
 it would be a worthwhile effort to introduce a sub-team + meeting for the
 Nova Libvirt driver:
 
 https://wiki.openstack.org/wiki/Nova#Nova_subteams
 https://wiki.openstack.org/wiki/Meetings/Libvirt
 https://etherpad.openstack.org/p/nova-libvirt-meeting-agenda
 
 I have arbitrarily picked Tuesdays at 1500 UTC on IRC #openstack-meeting-3
 as the time + place for the meeting. If this turns out to be horrible for
 a significant number of people, we can discuss alternate times (as long as
 they are not Friday evenings ;-), or alternate between 2 times. Currently
 this time works out as:
 
 08:00 San Francisco
 11:00 Boston
 15:00 UTC
 16:00 London
 17:00 Berlin
 20:30 Mumbai
 23:00 Beijing
 24:00 Tokyo
 
 
 http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=00&sec=0&p1=0
 
 So I suggest the first meeting take place next week:
 
Tuesday May 20th at 15:00 UTC
 
 I don't want to add bureaucracy to the libvirt driver development workflow.
 Rather I intend that this meeting is a way to facilitate libvirt related
 discussions between different parties/companies and to resolve roadblocks
 that people working on libvirt may be facing. It should also be a place for
 other OpenStack teams (Neutron/Glance/Infra/etc) to come to meet the
 Nova libvirt team and raise topics they have.
 
 
 If you want to attend this meeting, please record topics for discussion
 in the etherpad (and put your name + IRC nick against items)
 
https://etherpad.openstack.org/p/nova-libvirt-meeting-agenda
 
 
 Historically KVM / QEMU have got the majority of attention from libvirt
 developers, to the extent that we (unfortunately) caused breakage of both
 LXC and Xen during Icehouse. From mail and design summit discussions, it
 seems there is a critical mass of people interested in raising the quality
 of LXC and Xen during Juno and getting gate CI up & running. So I think
 this meeting would be a good place for those interested in Libvirt LXC &
 Xen support to coordinate their initial efforts / planning.
 
 Regards,
 Daniel
 -- 
 |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova] request for review for backport of bug 1240849 to havana

2014-05-20 Thread Thierry Carrez
George Shuklin wrote:
 Good day.
 
 Could someone, please, review backport of
 https://bugs.launchpad.net/nova/+bug/1240849 to stable/havana.
 
 I've checked it in my lab and it fixes the problem with 'no network
 after soft reboot', but I've made some invasive changes to the logic, so
 it would be very helpful if someone with good knowledge of neutron
 internals could check my work.
 
 Link to review: https://review.openstack.org/#/c/93343/

AFAICT this is a neutron review, not a nova one?

I added a havana task to the neutron bug so that it doesn't fall through
the cracks and gets noticed by the stable-maint team.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Divergence of *-specs style checking

2014-05-20 Thread Yuriy Taraday
Great idea!

On Mon, May 19, 2014 at 8:38 PM, Alexis Lee alex...@hp.com wrote:

 Potentially the TITLES structure could
 be read from a per-project YAML file and the test itself could be drawn
 from some common area?


I think you can get that data from the template.rst file by parsing it and
analyzing the resulting tree. A rough sketch of that idea follows.
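Assuming docutils is available (which Sphinx-based specs repos already depend
on), and with a hypothetical template path:

    from docutils import nodes
    from docutils.core import publish_doctree

    def expected_titles(template_path='specs/template.rst'):
        # Parse the RST template and collect every section title, giving
        # the per-project TITLES structure without a separate YAML file.
        with open(template_path) as f:
            doctree = publish_doctree(f.read())
        return [section.next_node(nodes.title).astext()
                for section in doctree.traverse(nodes.section)]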

-- 

Kind regards, Yuriy.


[openstack-dev] [gate] failing postgres jobs

2014-05-20 Thread Joe Gordon
Hi All,

If you hit an unknown error in a postgres job since Tue May 20 00:30:48
2014 +0000, you probably hit https://bugs.launchpad.net/trove/+bug/1321093
(*-tempest-dsvm-postgres-full failing on trove-manage db_sync).

A fix is in the works: https://review.openstack.org/#/c/94315/

so once the fix lands, just run 'recheck bug 1321093'

Additional patches are up to prevent this from happening again as well
[0][1].

best,
Joe

[0] https://review.openstack.org/#/c/94307/
[1] https://review.openstack.org/#/c/94314/


Re: [openstack-dev] [Ironic] - Integration with neutron using external attachment point

2014-05-20 Thread Igor Cardoso
Hello Kevin.
There is a similar Neutron blueprint [1], originally meant for Havana but
now aiming for Juno.
I would be happy to join efforts with you regarding our blueprints.
See also: [2].

[1] https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
[2] https://blueprints.launchpad.net/neutron/+spec/campus-network


On 19 May 2014 23:52, Kevin Benton blak...@gmail.com wrote:

 Hello,

 I am working on an extension for neutron to allow external attachment
 point information to be stored and used by backend plugins/drivers to place
 switch ports into neutron networks[1].

 One of the primary use cases is to integrate ironic with neutron. The
 basic workflow is that ironic will create the external attachment points
 when servers are initially installed. This step could either be automated
 (extracting the switch ID and port number from LLDP messages) or it could be
 manually performed by an admin who notes the ports a server is plugged into.

 Then when an instance is chosen for assignment and the neutron port needs
 to be created, the creation request would reference the corresponding
 attachment ID and neutron would configure the physical switch port to place
 the port on the appropriate neutron network.

 If this workflow won't work for Ironic, please respond to this email or
 leave comments on the blueprint review.

 1. https://review.openstack.org/#/c/87825/


 Thanks
 --
 Kevin Benton





-- 
Igor Duarte Cardoso.
http://igordcard.blogspot.com


Re: [openstack-dev] [Neutron] Consistency between models and migrations

2014-05-20 Thread Anna Kamyshnikova
Hi!

It is nice to see such careful attention to my change requests. In fact they
are related to the same topic that Johannes mentioned. The idea behind this is
described here:
https://blueprints.launchpad.net/neutron/+spec/db-sync-models-with-migrations

The problems mentioned in https://review.openstack.org/#/c/82073/ and
https://review.openstack.org/#/c/80518/ were discovered with the help of
this test, https://review.openstack.org/74081, which I adapted for Neutron in
https://review.openstack.org/76520. It helped to find a lot of bugs in the
Neutron database and models. Now that this change
(https://review.openstack.org/74081) is moving to oslo.db, all of this
work is frozen.

Regards,
Ann



On Tue, May 20, 2014 at 9:02 AM, Johannes Erdfelt johan...@erdfelt.com wrote:

 On Tue, May 20, 2014, Collins, Sean sean_colli...@cable.comcast.com
 wrote:
  I've been looking at two reviews that Ann Kamyshnikova has proposed
 
  https://review.openstack.org/#/c/82073/
 
  https://review.openstack.org/#/c/80518/
 
  I think the changes are fundamentally a Good Thing™  - they appear to
  reduce the differences between the database models and their
  corresponding migrations – as well as fixing differences in the
  generated DDL between Postgres and MySQL.
 
  The only thing I'm concerned about, is how to prevent these
  inconsistencies from sneaking into the codebase in the future. The
  one review that fixes ForeignKey constraints that are missing a name
  argument which ends up failing to create indexes in Postgres – I can
  see myself repeating that mistake, it's very subtle.
 
  Should we have some sort of HACKING for database models and
  migrations? Are these problems subtle enough that they warrant
  changes to SQLAlchemy/Alembic?

 On the Nova side of things, there has been similar concerns.

 There is a nova-spec that is proposing adding a unit test to check the
 schema versus the model:

 https://review.openstack.org/#/c/85325/

 This should work, but I think the underlying problem is one of DRY. We
 should not need to declare a schema in a model and then a set of
 imperative tasks to get to that point. All too often they get
 out of sync.

 I informally proposed a different solution, moving schema migrations to
 a declarative model. I wrote a proof of concept to show how something
 like this would work:

 https://github.com/jerdfelt/muscle

 We already have a model written (though it needs some fixes to make it
 accurate wrt the existing migrations); we should be able to emit ALTER TABLE
 statements based on the existing schema to bring it into line with the
 model, as sketched below.
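 To make the idea concrete, a rough sketch of the schema/model diff, assuming
 SQLAlchemy (illustrative only, not the actual muscle implementation):

     # Diff the live schema against the model metadata and emit ALTER TABLE
     # statements for any missing columns (columns only, for brevity).
     from sqlalchemy import inspect
     from sqlalchemy.schema import CreateColumn

     def missing_column_ddl(engine, model_metadata):
         inspector = inspect(engine)
         existing_tables = set(inspector.get_table_names())
         statements = []
         for table in model_metadata.sorted_tables:
             if table.name not in existing_tables:
                 continue  # table creation would be handled separately
             existing = {c['name'] for c in inspector.get_columns(table.name)}
             for column in table.columns:
                 if column.name not in existing:
                     ddl = CreateColumn(column).compile(dialect=engine.dialect)
                     statements.append('ALTER TABLE %s ADD COLUMN %s'
                                       % (table.name, ddl))
         return statements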

 This also has the added benefit of allowing certain schema migrations to
 be done online, while services are still running. This can significantly
 reduce downtime during deploys (a big concern for large deployments of
 Nova).

 There are some corner cases that do cause problems (renaming columns,
 changing column types, etc). Those can either remain as traditional
 migrations and/or discouraged.

 Data migrations would still remain with sqlalchemy-migrate/alembic, but
 there have been some proposals about solving that problem too.

 JE





Re: [openstack-dev] [Ironic][Neutron] - Integration with neutron using external attachment point

2014-05-20 Thread Akihiro Motoki
# Added [Neutron] tag as well.

Hi Igor,

Thanks for the comment. We are already aware of them, as I commented
in the Summit session and the ML2 weekly meeting.
Kevin's blueprint now covers Ironic integration and layer-2 network gateways,
and I believe the campus-network blueprint will be covered as well.

We think the work can be split into a generic API definition and
implementations (including ML2). In the external attachment point blueprint
review, the API and generic topics have mainly been discussed so far, and the
implementation details have not been discussed much yet. The ML2
implementation details can be discussed later (separately or as part of the
blueprint review).

I am not sure what changes are proposed in blueprint [1].
AFAIK an SDN/OpenFlow controller based approach can support this, but how can
we achieve this with the existing open source implementations? I am also
interested in the ML2 implementation details.

Anyway more input will be appreciated.

Thanks,
Akihiro

On Tue, May 20, 2014 at 7:13 PM, Igor Cardoso igordc...@gmail.com wrote:
 Hello Kevin.
 There is a similar Neutron blueprint [1], originally meant for Havana but
 now aiming for Juno.
 I would be happy to join efforts with you regarding our blueprints.
 See also: [2].

 [1] https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
 [2] https://blueprints.launchpad.net/neutron/+spec/campus-network


 On 19 May 2014 23:52, Kevin Benton blak...@gmail.com wrote:

 Hello,

 I am working on an extension for neutron to allow external attachment
 point information to be stored and used by backend plugins/drivers to place
 switch ports into neutron networks[1].

 One of the primary use cases is to integrate ironic with neutron. The
 basic workflow is that ironic will create the external attachment points
 when servers are initially installed. This step could either be automated
 (extracting the switch ID and port number from LLDP messages) or it could be
 manually performed by an admin who notes the ports a server is plugged into.

 Then when an instance is chosen for assignment and the neutron port needs
 to be created, the creation request would reference the corresponding
 attachment ID and neutron would configure the physical switch port to place
 the port on the appropriate neutron network.

 If this workflow won't work for Ironic, please respond to this email or
 leave comments on the blueprint review.

 1. https://review.openstack.org/#/c/87825/


 Thanks
 --
 Kevin Benton





 --
 Igor Duarte Cardoso.
 http://igordcard.blogspot.com





Re: [openstack-dev] [qa][nova] Status of v3 tests in tempest

2014-05-20 Thread Sean Dague
On 05/19/2014 11:49 PM, Christopher Yeoh wrote:
 
 
 
  On Mon, May 19, 2014 at 11:58 PM, David Kranz dkr...@redhat.com wrote:
 
 Removing [nova]
 
 
 On 05/19/2014 02:55 PM, Sean Dague wrote:
 My suggestion is that we stop merging new Nova v3 tests from here
 forward. However I think until we see the fruits of the v2.1 effort I
 don't want to start ripping stuff out.
 Fair enough but we need to revert, or at least stop taking patches,
 for
 https://blueprints.launchpad.net/tempest/+spec/nova-api-test-inheritance
 which is trying to make supporting two monolithic apis share code.
 We will share code for micro versions but it will be distributed and
 not based on class inheritance.
 
 
 Hrm - we'll still have pretty similar issues with microversions as we do
  with v2/v3 - e.g. the test code for the same api with a different
  microversion will have a lot in common. So for test code we're probably
 back to either:
 
 - if/else inlined in tests based on the microversion mode that is
 being tested at the moment (perhaps least amount of code but cost is
 readability)
  - class inheritance (override specific bits where necessary - bit more
  code, but readability better?).
 - duplicated tests (min sharing)

Realistically, the current approach won't scale to micro versions. We
really won't be able to have 100 directories for Nova, or 100 levels of
class inheritance.

When a micro version happens, it will affect a small number of
interfaces. So the important thing will be testing those interfaces
before and after that change. We'll have to be really targeted here.
Much like the way the database migration tests with data injection are.

Honestly, I think this is going to be hard to fully map until we've got
an interesting version sitting in front of us.

-Sean

-- 
Sean Dague
http://dague.net





[openstack-dev] [relmgt] Proposed Juno release schedule

2014-05-20 Thread Thierry Carrez
Hello everyone,

At the Design Summit last week we discussed the Juno release schedule
and came up with the following proposal:

https://wiki.openstack.org/wiki/Juno_Release_Schedule

The main reported issue with it is the presence of the US Labor Day
weekend just before juno-3 (feature freeze) week. That said, there
aren't a lot of options there if we want to preserve 6 weeks between FF
and release. I expect that with feature freeze happening on the Thursday
rather than the Tuesday (due to the new process around milestone
tagging), it will have limited impact.

The schedule will be discussed and approved at the release meeting today
(21:00 UTC in #openstack-meeting).

Regards,

-- 
Thierry Carrez (ttx)



[openstack-dev] [TC][Marconi][All] TC Representative for incubated projects

2014-05-20 Thread Flavio Percoco

Greetings,

After a lot of talking, planning and, most importantly, the results of
Marconi's previous graduation attempt, we've been thinking about how
incubated projects can be more aligned, integrated and updated with
the TC and the governance changes.

Most of us are subscribed to the Governance gerrit project, which
updates us with the latest proposed changes. Besides that, most of us
also follow the TC meetings and/or read meeting logs as much as
possible. However, we realise that this might not be enough from a
growth perspective for incubated projects.

It's important for incubated projects to have a representative in the
TC. This person won't be sponsoring the project but guiding it
with a TC hat on. This guidance could translate into monthly
meetings with the project leader/team to check the project status and
next steps towards graduation.

Marconi's team has, informally, asked Devananda van der Veen to be the
representative of the project in the TC. Devananda kindly accepted the
task.

Since I believe this is useful not just for Marconi but all incubated
projects, I'd like to throw the idea out there hoping this can become
part of the growth process for newly incubated projects.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [Neutron][Security Groups] Pings to router ip from VM with default security groups

2014-05-20 Thread Narasimhan, Vivekanandan
Hi ,



We have been trying to understand behavior of security group rules in icehouse 
stable.



The default security group contains 4 rules, two ingress and two egress.



The two ingress rules are one for IPv4 and other for IPv6.

We see that both ingress rules use cyclic security groups, wherein the rule's 
remote_security_group_id is the same as the security_group_id itself.



Vm1 ---> R1 <--- Vm2



Vm1 20.0.0.2

R1 interface 1 - 20.0.0.1

R1 interface 2 - 30.0.0.1

Vm2 30.0.0.2



We saw that with default security groups, Vm1 can ping its DHCP Server IP 
because of provider_rule in security group rules.



Vm1 is also able to ping Vm2 via router R1, as Vm1 port and Vm2 port share the 
same security group.



However, we noticed that Vm1 is also able to ping the router interfaces (R1 
interface 1 IP - 20.0.0.1 and R1 interface 2 IP - 30.0.0.1) successfully.



Router interfaces do not have security groups associated with them, so the 
router interface IPs won't get added to the iptables rules on the compute 
node (CN) where Vm1 resides.



We are not able to figure out how the pings from Vm1 to the router interfaces 
work when no explicit rules are added to allow them.



Could you please throw some light on this?



--

Thanks,



Vivek





Re: [openstack-dev] [qa][nova] Status of v3 tests in tempest

2014-05-20 Thread Christopher Yeoh
On Tue, May 20, 2014 at 8:58 PM, Sean Dague s...@dague.net wrote:

 On 05/19/2014 11:49 PM, Christopher Yeoh wrote:
 
  - if/else inlined in tests based on the microversion mode that is
  being tested at the moment (perhaps least amount of code but cost is
  readability)
  - class inheritance (override specific bits where necessary - bit more
  code, but readbility better?).
  - duplicated tests (min sharing)

 Realistically, the current approach won't scale to micro versions. We
  really won't be able to have 100 directories for Nova, or 100 levels of
  class inheritance.

 When a micro version happens, it will affect a small number of
 interfaces. So the important thing will be testing those interfaces
 before and after that change. We'll have to be really targeted here.
 Much like the way the database migration tests with data injection are.

 Honestly, I think this is going to be hard to fully map until we've got
 an interesting version sitting in front of us.


So I agree that we won't be able to have a new directory for every
microversion. But for the v2/v3 changes
we already have a lot of typical minor changes we'll need to handle. Eg.

- a parameter that has been renamed or removed (effectively the same thing
from an API point of view)
- a success status code that has changed

Something like say a tasks API would I think be quite different because
there would be a lot less shared code for the tests and so we'll need a
different solution.

I guess what I'm saying is once we have a better idea of how the
microversion interface will work then I think doing the work to minimise
the code duplication on the tempest side is worth it because we have lots
of examples of the sorts of cases we'll need to handle.
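To make the inheritance option concrete, a toy sketch (all names and the
fake API are invented for illustration, not Tempest's actual classes):

    import unittest

    # Pretend API: the base version returns 202, the new microversion 200.
    def create_server(api_version):
        return 202 if api_version == '2.0' else 200

    class ServersTestBase(unittest.TestCase):
        api_version = '2.0'
        expected_create_status = 202

        def test_create_server(self):
            self.assertEqual(create_server(self.api_version),
                             self.expected_create_status)

    class ServersTestMicroversion(ServersTestBase):
        # override only what the microversion changed
        api_version = '2.1'
        expected_create_status = 200

    if __name__ == '__main__':
        unittest.main()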

Regards,

Chris



 -Sean

 --
 Sean Dague
 http://dague.net






Re: [openstack-dev] [nova][neutron][mysql] IMPORTANT: MySQL Galera does *not* support SELECT ... FOR UPDATE

2014-05-20 Thread Peter Boros
Hi,

I would like to shed some additional light on this for those who were
not there. So, SELECT ... FOR UPDATE does lock on a single node, as it
is pointed out earlier in this thread, a simple solution is to write
only one node at a time. Haproxy can be set up with both backends, see
this blog post for example.
http://www.mysqlperformanceblog.com/2012/06/20/percona-xtradb-cluster-reference-architecture-with-haproxy/

In a nutshell, and with a bit of an oversimplification, galera
replicates in write sets. A write set is practically a row based
binary log event + some metadata which is good for 2 things: you can
take a look at 2 write sets and tell if they are conflicting or not,
and you can take a look at a writeset and a database, and tell if the
write set is applicable to the database. At the time of commit, the
transaction is transferred to all the other cluster nodes in parallel.
On the remote node, the new transaction is compared to each other
transaction waiting in the queue to be applied, and it's checked if
it's applicable to the database. If the transaction is not
conflicting, and it's applicable, it's queued, and the node signals
back that the commit can proceed. There is a nice drawing about this
here:

http://www.percona.com/doc/percona-xtradb-cluster/5.6/features/multimaster-replication.html

So, because of this, the locks of SELECT FOR UPDATE won't replicate.
Between nodes, galera uses optimistic locking. This means that we
assume that during the certification process (described above), there
will be no conflicts. If there are conflicts, the transaction is
rolled back on the originating node, and this is when you receive the
error message in question. A failed transaction is something which can
happen any time with any database engine with any interesting
feature, and when a transaction fails, the application should know
what to do with it. In case of galera, what would be a wait on row
locks in the single-node case becomes a rollback under replication. A
rollback is a much more expensive operation (data has
to be copied back from undo), so if there are lots of failures like
this, performance will suffer.
So, this is not a deadlock in the classical sense. Yet, InnoDB can
roll back a transaction any time because of a deadlock (any database
engine can do that, including PostgreSQL), and the application should
be able to handle this.
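To make that concrete, a minimal sketch of the retry pattern (assuming
SQLAlchemy; the deadlock detection here is simplistic and illustrative, not
production code):

    import time
    from sqlalchemy.exc import OperationalError

    def run_with_retry(txn_fn, session, max_attempts=5):
        """Run txn_fn(session) in a transaction, retrying on deadlock."""
        for attempt in range(1, max_attempts + 1):
            try:
                result = txn_fn(session)
                session.commit()
                return result
            except OperationalError as e:
                session.rollback()
                if attempt == max_attempts or 'deadlock' not in str(e).lower():
                    raise
                time.sleep(0.1 * attempt)  # back off before retrying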

As it was noted earlier, writing to a single node only at a time is a
good solution for avoiding this. With multiple nodes written, storage
engine level writes will still happen on every node, because every
node has the whole data set. Writing on multiple nodes can be
beneficial because parsing SQL is much more expensive than just
applying a row based binary log event, so you can see some performance
improvement if all nodes are written.

I would discourage using any type of multi-master replication without
understanding how conflict resolution works in case of the chosen
solution. In case of galera, if row locks were replicated over the
network, it would act the same way as a single server, but it would be
really slow. If SELECT FOR UPDATE is only used to achieve consistent
reads (read your own writes), that can be achieved with
wsrep_causal_reads. I am happy to help to avoid SELECT FOR UPDATE if
somebody can tell me the use cases.

On Tue, May 20, 2014 at 10:53 AM, Julien Danjou jul...@danjou.info wrote:
 On Mon, May 19 2014, Jay Pipes wrote:

 I think at that point I mentioned that there were a number of places that
 were using the SELECT ... FOR UPDATE construct in Nova (in SQLAlchemy, it's
 the with_lockmode('update') modification of the query object). Peter
 promptly said that was a problem. MySQL Galera does not support SELECT ...
 FOR UPDATE, since it has no concept of cross-node locking of records and
 results are non-deterministic.

 So you send a command that's not supported and the whole software
 deadlocks? Is there a bug number about that or something? I cannot
 understand how this can be possible and considered as something normal
 (that's the feeling I have reading your mail, I may be wrong).

 We have a number of options:

 1) Stop using MySQL Galera for databases of projects that contain
 with_lockmode('update')

 2) Put a big old warning in the docs somewhere about the problem of
 potential deadlocks or odd behaviour with Galera in these projects

 3) For Nova and Neutron, remove the use of with_lockmode('update') and
 instead use a coarse-grained file lock or a distributed lock manager for
 those areas where we need deterministic reads or quiescence.

 4) For the Nova db quota driver, refactor the driver to either use a
 non-locking method for reservation and quota queries or move the driver out
 into its own projects (or use something like Climate and make sure that
 Climate uses a non-blocking algorithm for those queries...)

 Thoughts?

 5) Stop leveling down our development, and rely and leverage a powerful
 RDBMS that 

[openstack-dev] Jenkins CI job Error

2014-05-20 Thread trinath.soman...@freescale.com
Hi-

The Jenkins CI jobs posted a build failure for a gerrit review.

check-tempest-dsvm-neutron-pg, in ceilometer-anotification
(http://logs.openstack.org/92/78092/10/check/check-tempest-dsvm-neutron-pg/312e9c0/)


(full log: http://logs.openstack.org/92/78092/10/check/check-tempest-dsvm-neutron-pg/312e9c0/logs/screen-ceilometer-anotification.txt.gz?level=TRACE#_2014-05-20_11_13_13_088)

2014-05-20 11:13:13.088 15324 ERROR oslo.messaging.notify.dispatcher [-] Exception during message handling
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher Traceback (most recent call last):
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/notify/dispatcher.py", line 85, in _dispatch_and_handle_error
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher     return self._dispatch(incoming.ctxt, incoming.message)
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/notify/dispatcher.py", line 121, in _dispatch
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher     metadata)
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher   File "/opt/stack/new/ceilometer/ceilometer/plugin.py", line 107, in info
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher     self.to_samples_and_publish(context.get_admin_context(), notification)
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher   File "/opt/stack/new/ceilometer/ceilometer/plugin.py", line 125, in to_samples_and_publish
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher     p(list(self.process_notification(notification)))
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher   File "/opt/stack/new/ceilometer/ceilometer/network/notifications.py", line 88, in process_notification
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher     resource_id=message['payload']['id'],
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher KeyError: 'id'
2014-05-20 11:13:13.088 15324 TRACE oslo.messaging.notify.dispatcher
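The failure itself is a plain KeyError on the notification payload; a hedged
sketch of the kind of guard that avoids this class of failure (illustrative
only, not the actual ceilometer fix):

    # Some notifications evidently arrive without an 'id' in the payload,
    # so look it up defensively instead of indexing.
    def safe_resource_id(message):
        payload = message.get('payload') or {}
        return payload.get('id')  # None instead of KeyError when absent

    print(safe_resource_id({'payload': {}}))  # -> None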

check-tempest-dsvm-neutron-pg-2, in ceilometer-anotification
(http://logs.openstack.org/92/78092/10/check/check-tempest-dsvm-neutron-pg-2/850d041/logs/)


(full log: http://logs.openstack.org/92/78092/10/check/check-tempest-dsvm-neutron-pg-2/850d041/logs/screen-ceilometer-anotification.txt.gz?level=TRACE#_2014-05-20_11_16_12_368)

2014-05-20 11:16:12.368 15678 ERROR oslo.messaging.notify.dispatcher [-] Exception during message handling
2014-05-20 11:16:12.368 15678 TRACE oslo.messaging.notify.dispatcher Traceback (most recent call last):
2014-05-20 11:16:12.368 15678 TRACE oslo.messaging.notify.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/notify/dispatcher.py", line 85, in _dispatch_and_handle_error
2014-05-20 11:16:12.368 15678 TRACE oslo.messaging.notify.dispatcher     return self._dispatch(incoming.ctxt, incoming.message)
2014-05-20 11:16:12.368 15678 TRACE oslo.messaging.notify.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/notify/dispatcher.py", line 121, in _dispatch
2014-05-20 11:16:12.368 15678 TRACE oslo.messaging.notify.dispatcher     metadata)
2014-05-20 11:16:12.368 15678 TRACE oslo.messaging.notify.dispatcher   File "/opt/stack/new/ceilometer/ceilometer/plugin.py", line 107, in info
2014-05-20 11:16:12.368 15678 TRACE oslo.messaging.notify.dispatcher     self.to_samples_and_publish(context.get_admin_context(), notification)

Re: [openstack-dev] [sahara] Nominate Trevor McKay for sahara-core

2014-05-20 Thread Telles Nobrega
+1


On Mon, May 19, 2014 at 11:13 AM, Sergey Lukjanov slukja...@mirantis.com wrote:

 Trevor, congrats!

 welcome to the sahara-core.

 On Thu, May 15, 2014 at 11:41 AM, Matthew Farrellee m...@redhat.com
 wrote:
  On 05/12/2014 05:31 PM, Sergey Lukjanov wrote:
 
  Hey folks,
 
  I'd like to nominate Trevor McKay (tmckay) for sahara-core.
 
  He is among the top reviewers of Sahara subprojects. Trevor is working
  on Sahara full time since summer 2013 and is very familiar with
  current codebase. His code contributions and reviews have demonstrated
  a good knowledge of Sahara internals. Trevor has a valuable knowledge
  of EDP part and Hadoop itself. He's working on both bugs and new
  features implementation.
 
  Some links:
 
  http://stackalytics.com/report/contribution/sahara-group/30
  http://stackalytics.com/report/contribution/sahara-group/90
  http://stackalytics.com/report/contribution/sahara-group/180
 
 
 https://review.openstack.org/#/q/owner:tmckay+sahara+AND+-status:abandoned,n,z
  https://launchpad.net/~tmckay
 
  Sahara cores, please, reply with +1/0/-1 votes.
 
  Thanks.
 
 
  +1
 
 



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.





-- 
--
Telles Mota Vidal Nobrega
Bsc in Computer Science at UFCG
Software Engineer at PulsarOpenStack Project - HP/LSD-UFCG


Re: [openstack-dev] [Ironic] [UX] Is there horizon implementation for ironic

2014-05-20 Thread Jaromir Coufal

Hi,

I am currently improving UI designs for node management via Ironic based 
on the feedback from OpenStack Summit. In the Infrastructure dashboard, 
there are basic views for nodes, but at this moment it is handled via 
nova-baremetal (when we implemented these views, Ironic was not ready yet).


New views for node management are intended to work with Ironic and after 
the mockups are reviewed, we are going to work on their implementation.


I will be posting the designs by the end of this week hopefully. Any 
feedback or help with implementation will be very welcome then.


Best
-- Jarda

On 2014/19/05 09:58, 严超 wrote:

Hi, All:
Ironic is a project for us to control bare metal better. Is there any
horizon implementation for ironic to use the ironic api and functions easily?

Best Regards!
Chao Yan
--
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--







Re: [openstack-dev] [Ironic] [UX] Is there horizon implementation for ironic

2014-05-20 Thread 严超
cool
hope to see your design and implementation soon
On May 20, 2014 8:47 PM, Jaromir Coufal jcou...@redhat.com wrote:

 Hi,

 I am currently improving UI designs for node management via Ironic based
 on the feedback from OpenStack Summit. In the Infrastructure dashboard,
 there are basic views for nodes, but at this moment it is handled via
 nova-baremetal (when we implemented these views, Ironic was not ready yet).

 New views for node management are intended to work with Ironic and after
 the mockups are reviewed, we are going to work on their implementation.

 I will be posting the designs by the end of this week hopefully. Any
 feedback or help with implementation will be very welcome then.

 Best
 -- Jarda

 On 2014/19/05 09:58, 严超 wrote:

 Hi, All:
 Ironic is a project for us to control bare metal better. Is there any
 horizon implementation for ironic to use the ironic api and functions easily?

 Best Regards!
 Chao Yan
 --
 My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
 My Weibo: http://weibo.com/herewearenow
 --







[openstack-dev] [oslo] specs blueprints for juno

2014-05-20 Thread Doug Hellmann
We agreed just before the summit that we wanted to participate in the
specs repository experiments for this cycle. The repository is set up
[1] and I've just posted a review for an updated template [2] that
includes some sections added to nova's template after we copied it and
some sections we need that other projects don't.

To keep tracking simpler, ttx and I intend to use launchpad only for
reporting and not for actually approving blueprints, so I would like
all blueprints to have a corresponding spec ASAP with 2 exceptions:
Ben has already finished graduate-config-fixture and the oslo-db-lib
work is far enough along that the *graduation* part of that doesn't
need to be written up (any other pending db changes not tied to a bug
should have a spec & blueprint created).

Please look over the template review, and start thinking about the
specs for your blueprints. After the updated template lands, we'll be
ready to start reviewing the specs for all of the blueprints we plan
to work on during Juno.

Thanks!
Doug

1. http://git.openstack.org/cgit/openstack/oslo-specs
2. https://review.openstack.org/94359



Re: [openstack-dev] [sahara] bug triage day after summit

2014-05-20 Thread Michael McCune
I think in our eagerness to triage bugs we might have missed that May 26 is a 
holiday in the U.S.

I know some of us have the day off work and while that doesn't necessarily stop 
the effort, it might throw a wrench in people's holiday weekend plans. I'm 
wondering if we should re-evaluate and make the following day (May 27) triage 
day instead?

regards,
mike

- Original Message -
 Hey sahara folks,
 
 let's make a Bug Triage Day after the summit.
 
 I'm proposing the May, 26 for it.
 
 Any thoughts/objections?
 
 Thanks.
 
 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.
 
 



Re: [openstack-dev] [nova][neutron][mysql] IMPORTANT: MySQL Galera does *not* support SELECT ... FOR UPDATE

2014-05-20 Thread Peter Boros
Hi,

Also, it would be nice to confirm that SELECT FOR UPDATEs really are
causing the deadlocks. Since these are row lock waits in the single-node
case, pt-query-digest on a slow log from a single node can help to
determine this.

pt-query-digest /path/to/slow.log --order-by InnoDB_rec_lock_wait:sum \
  > digest-rec_lock_waits.txt

It will show what statements waited for locks most, these will most
likely be the ones causing the deadlock issues in case of multi-node
writing.

On Tue, May 20, 2014 at 2:27 PM, Peter Boros peter.bo...@percona.com wrote:
 Hi,

 I would like to shed some additional light on this for those who were
 not there. So, SELECT ... FOR UPDATE does lock on a single node, as it
 is pointed out earlier in this thread, a simple solution is to write
 only one node at a time. Haproxy can be set up with both backends, see
 this blog post for example.
 http://www.mysqlperformanceblog.com/2012/06/20/percona-xtradb-cluster-reference-architecture-with-haproxy/

 In a nutshell, and with a bit of an oversimplification, galera
 replicates in write sets. A write set is practically a row based
 binary log event + some metadata which is good for 2 things: you can
 take a look at 2 write sets and tell if they are conflicting or not,
 and you can take a look at a writeset and a database, and tell if the
 write set is applicable to the database. At the time of commit, the
 transaction is transferred to all the other cluster nodes in parallel.
 On the remote node, the new transaction is compared to each other
 transaction waiting in the queue to be applied, and it's checked if
 it's applicable to the database. If the transaction is not
 conflicting, and it's applicable, it's queued, and the node signals
 back that the commit can proceed. There is a nice drawing about this
 here:

 http://www.percona.com/doc/percona-xtradb-cluster/5.6/features/multimaster-replication.html

 So, because of this, the locks of SELECT FOR UPDATE won't replicate.
 Between nodes, galera uses optimistic locking. This means that we
 assume that during the certification process (described above), there
 will be no conflicts. If there are conflicts, the transaction is
 rolled back on the originating node, and this is when you receive the
 error message in question. A failed transaction is something which can
 happen any time with any database engine with any interesting
 feature, and when a transaction fails, the application should know
 what to do with it. In case of galera, what would be a wait on row
 locks in the single-node case becomes a rollback under replication. A
 rollback is a much more expensive operation (data has
 to be copied back from undo), so if there are lots of failures like
 this, performance will suffer.
 So, this is not a deadlock in the classical sense. Yet, InnoDB can
 roll back a transaction any time because of a deadlock (any database
 engine can do that, including PostgreSQL), and the application should
 be able to handle this.

 As it was noted earlier, writing to a single node only at a time is a
 good solution for avoiding this. With multiple nodes written, storage
 engine level writes will still happen on every node, because every
 node has the whole data set. Writing on multiple nodes can be
 beneficial because parsing SQL is much more expensive than just
 applying a row based binary log event, so you can see some performance
 improvement if all nodes are written.

 I would discourage using any type of multi-master replication without
 understanding how conflict resolution works in case of the chosen
 solution. In case of galera, if row locks were replicated over the
 network, it would act the same way as a single server, but it would be
 really slow. If SELECT FOR UPDATE is only used to achieve consistent
 reads (read your own writes), that can be achieved with
 wsrep_causal_reads. I am happy to help to avoid SELECT FOR UPDATE if
 somebody can tell me the use cases.

 On Tue, May 20, 2014 at 10:53 AM, Julien Danjou jul...@danjou.info wrote:
 On Mon, May 19 2014, Jay Pipes wrote:

 I think at that point I mentioned that there were a number of places that
 were using the SELECT ... FOR UPDATE construct in Nova (in SQLAlchemy, it's
 the with_lockmode('update') modification of the query object). Peter
 promptly said that was a problem. MySQL Galera does not support SELECT ...
 FOR UPDATE, since it has no concept of cross-node locking of records and
 results are non-deterministic.

 So you send a command that's not supported and the whole software
 deadlocks? Is there a bug number about that or something? I cannot
 understand how this can be possible and considered as something normal
 (that's the feeling I have reading your mail, I may be wrong).

 We have a number of options:

 1) Stop using MySQL Galera for databases of projects that contain
 with_lockmode('update')

 2) Put a big old warning in the docs somewhere about the problem of
 potential deadlocks or odd behaviour with Galera in 

Re: [openstack-dev] [qa][nova] Status of v3 tests in tempest

2014-05-20 Thread David Kranz

On 05/20/2014 03:19 PM, Christopher Yeoh wrote:
On Tue, May 20, 2014 at 8:58 PM, Sean Dague s...@dague.net wrote:


On 05/19/2014 11:49 PM, Christopher Yeoh wrote:

 - if/else inlined in tests based on the microversion mode that is
 being tested at the moment (perhaps least amount of code but
cost is
 readability)
 - class inheritance (override specific bits where necessary -
bit more
 code, but readability better?).
 - duplicated tests (min sharing)

Realistically, the current approach won't scale to micro versions. We
really won't be able to have 100 directories for Nova, or 100 levels of
class inheritance.

When a micro version happens, it will affect a small number of
interfaces. So the important thing will be testing those interfaces
before and after that change. We'll have to be really targeted here.
Much like the way the database migration tests with data injection
are.

Honestly, I think this is going to be hard to fully map until
we've got
an interesting version sitting in front of us.


So I agree that we won't be able to have a new directory for every 
microversion. But for the v2/v3 changes we already have a lot of typical 
minor changes we'll need to handle. E.g.

- a parameter that has been renamed or removed (effectively the same 
thing from an API point of view)

- a success status code that has changed

Something like say a tasks API would I think be quite different 
because there would be a lot less shared code for the tests and so 
we'll need a different solution.


I guess what I'm saying is once we have a better idea of how the 
microversion interface will work then I think doing the work to 
minimise the code duplication on the tempest side is worth it because 
we have lots of examples of the sorts of cases we'll need to handle.


I agree. I think what Sean is saying, and this was the original intent 
of starting this thread, is that the structure we come up with for micro 
versions will look a lot different than the v2/v3 consolidation that was 
in progress in tempest when the decision to abandon v3 as a monolithic 
new api was made. So we have to stop the current changes based on a 
monolithic v2/v3, and then come up with a new organization based on 
micro versions when the nova approach has solidified sufficiently.


 -David



Regards,

Chris

-Sean

--
Sean Dague
http://dague.net










Re: [openstack-dev] [Neutron][Security Groups] Pings to router ip from VM with default security groups

2014-05-20 Thread McCann, Jack
I think this is a combination of two things...


1. When a VM initiates outbound communications, the egress rules
allow associated return traffic. So if you allow outbound echo
request, the return echo reply will also be allowed.



2. The router interface will respond to ping.

- Jack

From: Narasimhan, Vivekanandan
Sent: Tuesday, May 20, 2014 8:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][Security Groups] Pings to router ip from VM 
with default security groups

Hi ,

We have been trying to understand behavior of security group rules in icehouse 
stable.

The default security group contains 4 rules, two ingress and two egress.

The two ingress rules are one for IPv4 and other for IPv6.
We see that both ingress rules use cyclic security groups, wherein the rule's 
remote_security_group_id is the same as the security_group_id itself.

Vm1 ---  R1 -- Vm2

Vm1 20.0.0.2
R1 interface 1 - 20.0.0.1
R1 interface 2 - 30.0.0.1
Vm2 30.0.0.2

We saw that with default security groups, Vm1 can ping its DHCP Server IP 
because of provider_rule in security group rules.

Vm1 is also able to ping Vm2 via router R1, as Vm1 port and Vm2 port share the 
same security group.

However, we noticed that Vm1 is also able to ping the router interfaces (R1 
interface 1 IP - 20.0.0.1 and R1 interface 2 IP - 30.0.0.1) successfully.

Router interfaces do not have security groups associated with them, so the 
router interface IPs won't get added to the iptables rules on the compute 
node (CN) where Vm1 resides.

We are not able to figure out how the pings from Vm1 to the router interfaces 
work when no explicit rules are added to allow them.

Could you please throw some light on this?

--
Thanks,

Vivek



Re: [openstack-dev] [Neutron][LBaaS] Updated Object Model?

2014-05-20 Thread Kyle Mestery
On Mon, May 19, 2014 at 9:28 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
 In the API meeting at the summit, Mark McClain mentioned that the
 existing API should be supported, but deprecated so as not to interrupt
 those using the existing API.  To me, that sounds like the object model
 can change but there needs to be some kind of adapter/translation layer
 that modifies any existing current API calls to the new object model.

 So currently there is this blueprint spec that Eugene submitted:

 https://review.openstack.org/#/c/89903/3/specs/juno/lbaas-api-and-objmodel-improvement.rst

 That is implementing the object model with VIP as root object.  I
 suppose this needs to be changed to reflect the changes we agreed on at the
 summit. Also, this blueprint should cover the layer in which
 the existing API calls get mapped to this object model.

 My question is to anyone who knows for certain: should this blueprint
 just be changed to reflect the new object model agreed on at the summit
 or should a new blueprint spec be created?  If it should just be changed
 should it wait until Eugene gets back from vacation since he's the one
 who created this blueprint spec?

If you think it makes sense to change this existing document, I would
say we should update Eugene's spec mentioned above to reflect what was
agreed upon at the summit. I know Eugene is on vacation this week, so
in this case it may be ok for you to push a new revision of his
specification while he's out, updating it to reflect the object model
changes. This way we can make some quick progress on this front. We
won't approve this until he gets back and has a chance to review it.
Let me know if you need help in pulling this spec down and pushing a
new version.

Thanks,
Kyle

 After that, then the API change blueprint spec should be created that
 adds the /loadbalancers resource and other changes.

 If anyone else can add anything please do.  If I said anything wrong
 please correct me, and if anyone can answer my question above please do.

 Thanks,
 Brandon Logan

 On Mon, 2014-05-19 at 17:06 -0400, Susanne Balle wrote:
 Great summit!! fantastic to meeting you all in person.


 We now have agreement on the Object model. How do we turn that into
 blueprints and also how do we start making progress on the rest of the
 items we agree upon at the summit?


 Susanne


 On Fri, May 16, 2014 at 2:07 AM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
 Yeah that’s a good point.  Thanks!


 From: Eugene Nikanorov enikano...@mirantis.com
 Reply-To: openstack-dev@lists.openstack.org
 openstack-dev@lists.openstack.org

 Date: Thursday, May 15, 2014 at 10:38 PM

 To: openstack-dev@lists.openstack.org
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Updated Object
 Model?



 Brandon,


 It's allowed right now just per API. It's up to a backend to
 decide the status of a node in case some monitors find it
 dead.


 Thanks,
 Eugene.




 On Fri, May 16, 2014 at 4:41 AM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
 I have concerns about multiple health monitors on the
 same pool.  Is this always going to be the same type
 of health monitor?  There’s also ambiguity in the case
 where one health monitor fails and another doesn’t.
  Is it an AND or OR that determines whether the member
 is down or not?


 Thanks,
 Brandon Logan


 From: Eugene Nikanorov enikano...@mirantis.com
 Reply-To: openstack-dev@lists.openstack.org
 openstack-dev@lists.openstack.org
 Date: Thursday, May 15, 2014 at 9:55 AM
 To: openstack-dev@lists.openstack.org
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [Neutron][LBaaS] Updated
 Object Model?



 Vijay,


 Pools-monitors are still many to many, if it's not so
 on the picture - we'll fix that.
 I brought this up as an example of how we dealt with
 m:n via API.


 Thanks,
 Eugene.


 On Thu, May 15, 2014 at 6:43 PM, Vijay Venkatachalam
 vijay.venkatacha...@citrix.com wrote:
 Thanks for the clarification. Eugene.



 A tangential point since you brought healthmon
 and pool.



 There will be an additional entity called
 ‘PoolMonitorAssociation’ which results in a
 many to many relationship between pool and
 monitors. Right?



 Now, the model is indicating a 

[openstack-dev] [QA] Meeting Thursday May 22nd at 17:00UTC

2014-05-20 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
this Thursday, May 22nd at 17:00 UTC in the #openstack-meeting channel. I'm
sending the reminder out a little earlier this week because the usual meeting
cadence was interrupted by the off week and summit.

The agenda for Thursday's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones, Thursday's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][nova] how to best deal with default periodic task spacing behavior

2014-05-20 Thread Matt Riedemann
Between patch set 1 and patch set 3 here [1] we have different solutions 
to the same issue, which is if you don't specify a spacing value for 
periodic tasks then they run whenever the periodic task processor runs, 
which is non-deterministic and can be staggered if some tasks don't 
complete in a reasonable amount of time.
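
For anyone unfamiliar with the knob in question, here is a minimal sketch of
the decorator (illustrative only, not the patch itself):

    from nova.openstack.common import periodic_task

    class ExampleManager(periodic_task.PeriodicTasks):

        @periodic_task.periodic_task
        def _no_spacing(self, context):
            # spacing unset: runs on every pass of the periodic task
            # processor -- the non-deterministic behavior described above
            pass

        @periodic_task.periodic_task(spacing=120)
        def _spaced(self, context):
            # throttled: runs at most once every 120 seconds
            pass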


I'm bringing this to the mailing list to see if there are more opinions 
out there, especially from operators, since patch set 1 changes the 
default behavior to have the spacing value be the DEFAULT_INTERVAL 
(hard-coded 60 seconds) versus patch set 3 which makes that behavior 
configurable so the admin can set global default spacing for tasks, but 
defaults to the current behavior of running every time if not specified.


I don't like a new config option, but I'm also not crazy about changing 
existing behavior without consensus.


[1] https://review.openstack.org/#/c/93767/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Devstack Multinode on CentOS

2014-05-20 Thread Henrique Truta
Hello, Sean!

I'm trying to use Nova Network instead of Neutron due to its simplicity,
that's why I didn't specify any of this on the controller.

On the compute node, I enabled n-cpu,n-net,n-api,c-sch,c-api,c-vol,
because that's what I thought were needed to become a Host... I'll try to
disable the Cinder API.

The strangest part is that I run stack.sh on the compute node, and it
runs OK, but it doesn't create anything. Apparently, it only uses the API
on the Controller :/


2014-05-19 18:10 GMT-03:00 Collins, Sean sean_colli...@cable.comcast.com:

 On Mon, May 19, 2014 at 05:00:26PM EDT, Henrique Truta wrote:
  Controller localrc: http://paste.openstack.org/show/80953/
 
  Compute node localrc: http://paste.openstack.org/show/80955/

 These look backwards. The first pastebin link has no enabled services,
 while the pastebin you say is the compute node appears to have API
 services running in the enabled_services list.

 So - here's an example from my lab:

 Controller localrc:

 # Nova
 disable_service n-net
 enable_service n-cpu

 # Neutron
 ENABLED_SERVICES+=,neutron,q-svc,q-dhcp,q-meta,q-agt

 Compute localrc:

 ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt


 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
--
Ítalo Henrique Costa Truta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] bug triage day after summit

2014-05-20 Thread Sergey Lukjanov
I'm ok with moving it to May 27.

On Tuesday, May 20, 2014, Michael McCune mimcc...@redhat.com wrote:

 I think in our eagerness to triage bugs we might have missed that May 26
 is a holiday in the U.S.

 I know some of us have the day off work and while that doesn't necessarily
 stop the effort, it might throw a wrench in people's holiday weekend plans.
 I'm wondering if we should re-evaluate and make the following day(May 27)
 triage day instead?

 regards,
 mike

 - Original Message -
  Hey sahara folks,
 
  let's make a Bug Triage Day after the summit.
 
  I'm proposing the May, 26 for it.
 
  Any thoughts/objections?
 
  Thanks.
 
  --
  Sincerely yours,
  Sergey Lukjanov
  Sahara Technical Lead
  (OpenStack Data Processing)
  Mirantis Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org javascript:;
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org javascript:;
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] Meeting time moving?

2014-05-20 Thread Clark, Robert Graham
Hi All,

At the summit I heard that the Barbican meeting time might be moving,
has anything been agreed? 

Cheers
-Rob



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] bug triage day after summit

2014-05-20 Thread Andrew Lazarev
I think May 26 was a random 'day after summit'. I'm Ok with May 27 too.

Andrew.


On Tue, May 20, 2014 at 10:16 AM, Sergey Lukjanov slukja...@mirantis.com wrote:

 I'm ok with moving it to May 27.

 On Tuesday, May 20, 2014, Michael McCune mimcc...@redhat.com wrote:

 I think in our eagerness to triage bugs we might have missed that May 26
 is a holiday in the U.S.

 I know some of us have the day off work and while that doesn't
 necessarily stop the effort, it might throw a wrench in people's holiday
 weekend plans. I'm wondering if we should re-evaluate and make the
 following day(May 27) triage day instead?

 regards,
 mike

 - Original Message -
  Hey sahara folks,
 
  let's make a Bug Triage Day after the summit.
 
  I'm proposing the May, 26 for it.
 
  Any thoughts/objections?
 
  Thanks.
 
  --
  Sincerely yours,
  Sergey Lukjanov
  Sahara Technical Lead
  (OpenStack Data Processing)
  Mirantis Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Devstack Multinode on CentOS

2014-05-20 Thread Sean Dague
API should only be on the controller. You only want compute services
(n-cpu, n-net, c-vol) on the computes.

You also need to set MULTI_HOST=True for nova network. Some examples
of working config at -
https://github.com/sdague/devstack-vagrant/blob/master/puppet/modules/devstack/templates/local.erb
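
As a quick sketch of the shape of it (placeholder IP; the repo above has
complete, tested configs):

Controller localrc:

MULTI_HOST=True

Compute localrc:

MULTI_HOST=True
SERVICE_HOST=192.168.1.10   # controller IP, placeholder
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
ENABLED_SERVICES=n-cpu,n-net,c-vol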


Somewhere on my large TODO is to get this info back into the devstack
README (it used to be there).

-Sean

On 05/20/2014 10:15 AM, Henrique Truta wrote:
 Hello, Sean!
 
 I'm trying to use Nova Network instead of Neutron due to its simplicity,
 that's why I didn't specify any of this on the controller.
 
 On the compute node, I enabled n-cpu,n-net,n-api,c-sch,c-api,c-vol,
 because that's what I thought were needed to become a Host... I'll try
 to disable the Cinder API.
 
 The strangest part is that I run stack.sh on the compute node, and it
 runs OK, but it doesn't create anything. Apparently, it only uses the
 API on the Controller :/
 
 
 2014-05-19 18:10 GMT-03:00 Collins, Sean
 sean_colli...@cable.comcast.com:
 
 On Mon, May 19, 2014 at 05:00:26PM EDT, Henrique Truta wrote:
  Controller localrc: http://paste.openstack.org/show/80953/
 
  Compute node localrc: http://paste.openstack.org/show/80955/
 
 These look backwards. The first pastebin link has no enabled services,
 while the pastebin you say is the compute node appears to have API
 services running in the enabled_services list.
 
 So - here's an example from my lab:
 
 Controller localrc:
 
 # Nova
 disable_service n-net
 enable_service n-cpu
 
 # Neutron
 ENABLED_SERVICES+=,neutron,q-svc,q-dhcp,q-meta,q-agt
 
 Compute localrc:
 
 ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
 
 
 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 --
 Ítalo Henrique Costa Truta
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] [Heat][Documentation] Heat template documentation

2014-05-20 Thread Doug Hellmann
On Fri, May 16, 2014 at 2:10 PM, Gauvain Pocentek
gauvain.pocen...@objectif-libre.com wrote:
 Le 2014-05-16 17:13, Anne Gentle a écrit :

 On Thu, May 15, 2014 at 10:34 AM, Gauvain Pocentek
 gauvain.pocen...@objectif-libre.com wrote:

 Hello,

 This mail probably mainly concerns the doc team, but I guess that the
 heat team wants to know what's going on.

 We briefly discussed the state of heat documentation with Anne Gentle
 and Andreas Jaeger yesterday, and I'd like to share what we think would be
 nice to do.

 Currently we only have a small section in the user guide that describes
 how to start a stack, but nothing documenting how to write templates. The
 heat developer doc provides a good reference, but I think it's not easy to
 use to get started.

 So the idea is to add an OpenStack Orchestration chapter in the user
 guide that would document how to use a cloud with heat, and how to write
 templates.

 I've drafted a spec to keep track of this at [0].


 I'd like to experiment a bit with converting the End User Guide to an
 easier markup to enable more contributors to it. Perhaps bringing in
 Orchestration is a good point to do this, plus it may help address the
 auto-generation Steve mentions.

 The loss would be the single sourcing of the End User Guide and Admin
 User Guide as well as loss of PDF output and loss of translation. If
 these losses are worthwhile for easier maintenance and to encourage
 contributions from more cloud consumers, then I'd like to try an
 experiment with it.


 Using RST would probably make it easier to import/include the developers'
 documentation. But I'm not sure we can afford to lose the features you
 mention. Translations for the user guides are very important I think.

Sphinx does appear to have translation support:
http://sphinx-doc.org/intl.html?highlight=translation

I've never used the feature myself, so I don't know how good the workflow is.

Sphinx will generate PDFs, though the LaTeX output is not as nice
looking as what we get now. There's also a direct-to-pdf builder that
uses rst2pdf that appears to support templates, so that might be an
easier path to producing something attractive:
http://ralsina.me/static/manual.pdf


 How would we review changes made in external repositories? The user guides
 are continuously published, this means that a change done in the heat/docs/
 dir would quite quickly land on the webserver without a doc team review. I
 completely trust the developers, but I'm not sure that this is the way to
 go.



 The experiment would be to have a new repo set up,
 openstack/user-guide and use the docs-core team as reviewers on it.
 Convert the End User Guide from DocBook to RST and build with Sphinx.
 Use the oslosphinx tempate for output. But what I don't know is if
 it's possible to build the automated output outside of the
 openstack/heat repo, does anyone have interest in doing a proof of
 concept on this?


 I'm not sure that this is possible, but I'm no RST expert.

I'm not sure this quite answers the question, but the RST directives
for auto-generating docs from code usually depend on being able to
import the code. That means heat and its dependencies would need to be
installed on the system where the build is performed. We accomplish
this in the dev doc builds by using tox, which automatically handles
the installation as part of setting up the virtualenv where the build
command runs.



 I'd also like input on the loss of features I'm describing above. Is
 this worth experimenting with?


 Starting this new book sounds like a lot of work. Right now I'm not
 convinced it's worth it.

 Gauvain


 ___
 Openstack-docs mailing list
 openstack-d...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Updated Object Model?

2014-05-20 Thread Eugene Nikanorov
Hi folks,

Agree with Kyle, you may go ahead and update the spec on review to reflect
the design discussed at the summit.

Thanks,
Eugene.


On Tue, May 20, 2014 at 6:07 PM, Kyle Mestery mest...@noironetworks.com wrote:

 On Mon, May 19, 2014 at 9:28 PM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
  In the API meeting at the summit, Mark McClain mentioned that the
  existing API should be supported, but deprecated so as not to interrupt
  those using the existing API.  To me, that sounds like the object model
  can change but there needs to be some kind of adapter/translation layer
  that modifies any existing current API calls to the new object model.
 
  So currently there is this blueprint spec that Eugene submitted:
 
 
 https://review.openstack.org/#/c/89903/3/specs/juno/lbaas-api-and-objmodel-improvement.rst
 
  That is implementing the object model with VIP as root object.  I
  suppose this needs to be changed to have the changes we agreed on at the
  summit.  Also, this blueprint should also cover the layer in which the
  existing API calls get mapped to this object model.
 
  My question is to anyone who knows for certain: should this blueprint
  just be changed to reflect the new object model agreed on at the summit
  or should a new blueprint spec be created?  If it should just be changed
  should it wait until Eugene gets back from vacation since he's the one
  who created this blueprint spec?
 
 If you think it makes sense to change this existing document, I would
 say we should update Eugene's spec mentioned above to reflect what was
 agreed upon at the summit. I know Eugene is on vacation this week, so
 in this case it may be ok for you to push a new revision of his
 specification while he's out, updating it to reflect the object model
 changes. This way we can make some quick progress on this front. We
 won't approve this until he gets back and has a chance to review it.
 Let me know if you need help in pulling this spec down and pushing a
 new version.

 Thanks,
 Kyle

  After that, then the API change blueprint spec should be created that
  adds the /loadbalancers resource and other changes.
 
  If anyone else can add anything please do.  If I said anything wrong
  please correct me, and if anyone can answer my question above please do.
 
  Thanks,
  Brandon Logan
 
  On Mon, 2014-05-19 at 17:06 -0400, Susanne Balle wrote:
  Great summit!! fantastic to meeting you all in person.
 
 
  We now have agreement on the Object model. How do we turn that into
  blueprints and also how do we start making progress on the rest of the
  items we agree upon at the summit?
 
 
  Susanne
 
 
  On Fri, May 16, 2014 at 2:07 AM, Brandon Logan
  brandon.lo...@rackspace.com wrote:
  Yeah that’s a good point.  Thanks!
 
 
  From: Eugene Nikanorov enikano...@mirantis.com
  Reply-To: openstack-dev@lists.openstack.org
  openstack-dev@lists.openstack.org
 
  Date: Thursday, May 15, 2014 at 10:38 PM
 
  To: openstack-dev@lists.openstack.org
  openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Neutron][LBaaS] Updated Object
  Model?
 
 
 
  Brandon,
 
 
  It's allowed right now just per API. It's up to a backend to
  decide the status of a node in case some monitors find it
  dead.
 
 
  Thanks,
  Eugene.
 
 
 
 
  On Fri, May 16, 2014 at 4:41 AM, Brandon Logan
  brandon.lo...@rackspace.com wrote:
  I have concerns about multiple health monitors on the
  same pool.  Is this always going to be the same type
  of health monitor?  There’s also ambiguity in the case
  where one health monitor fails and another doesn’t.
   Is it an AND or OR that determines whether the member
  is down or not?
 
 
  Thanks,
  Brandon Logan
 
 
  From: Eugene Nikanorov enikano...@mirantis.com
  Reply-To: openstack-dev@lists.openstack.org
  openstack-dev@lists.openstack.org
  Date: Thursday, May 15, 2014 at 9:55 AM
  To: openstack-dev@lists.openstack.org
  openstack-dev@lists.openstack.org
 
  Subject: Re: [openstack-dev] [Neutron][LBaaS] Updated
  Object Model?
 
 
 
  Vijay,
 
 
  Pools-monitors are still many to many, if it's not so
  on the picture - we'll fix that.
  I brought this up as an example of how we dealt with
  m:n via API.
 
 
  Thanks,
  Eugene.
 
 
  On Thu, May 15, 2014 at 6:43 PM, Vijay Venkatachalam
  vijay.venkatacha...@citrix.com wrote:
  Thanks for the clarification. Eugene.
 
 
 
  A tangential point since you brought 

Re: [openstack-dev] Divergence of *-specs style checking

2014-05-20 Thread Alexis Lee
Yuriy Taraday said on Tue, May 20, 2014 at 01:37:29PM +0400:
 On Mon, May 19, 2014 at 8:38 PM, Alexis Lee alex...@hp.com wrote:
  Potentially the TITLES structure could
  be read from a per-project YAML file and the test itself could be drawn
  from some common area?
 
 I think you can get that data from template.rst file by parsing it and
 analyzing the tree.

Excellent suggestion!
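
For reference, a minimal sketch of that idea using docutils (illustrative
names, not the actual test code):

    import docutils.core
    from docutils import nodes

    def section_titles(rst_text):
        # Build a doctree from template.rst and collect the section
        # titles, so the expected structure comes from the template
        doctree = docutils.core.publish_doctree(rst_text)
        return [section.next_node(nodes.title).astext()
                for section in doctree.traverse(nodes.section)]

    with open('specs/template.rst') as f:
        expected_titles = section_titles(f.read())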

I've raised https://review.openstack.org/94380 and
https://review.openstack.org/94381 .

Nova-specs seems the right starting place for a shared version as it's
where it all began and they have a couple of tests Neutron + TripleO
currently lack.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] bug triage day after summit

2014-05-20 Thread Chad Roberts
+1 for May 27.

- Original Message -
From: Andrew Lazarev alaza...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, May 20, 2014 10:20:58 AM
Subject: Re: [openstack-dev] [sahara] bug triage day after summit

I think May 26 was a random 'day after summit'. I'm Ok with May 27 too. 

Andrew. 


On Tue, May 20, 2014 at 10:16 AM, Sergey Lukjanov  slukja...@mirantis.com  
wrote: 


I'm ok with moving it to May 27. 

On Tuesday, May 20, 2014, Michael McCune  mimcc...@redhat.com  wrote: 


I think in our eagerness to triage bugs we might have missed that May 26 is a 
holiday in the U.S. 

I know some of us have the day off work and while that doesn't necessarily stop 
the effort, it might throw a wrench in people's holiday weekend plans. I'm 
wondering if we should re-evaluate and make the following day(May 27) triage 
day instead? 

regards, 
mike 

- Original Message - 
 Hey sahara folks, 
 
 let's make a Bug Triage Day after the summit. 
 
 I'm proposing the May, 26 for it. 
 
 Any thoughts/objections? 
 
 Thanks. 
 
 -- 
 Sincerely yours, 
 Sergey Lukjanov 
 Sahara Technical Lead 
 (OpenStack Data Processing) 
 Mirantis Inc. 
 
 ___ 
 OpenStack-dev mailing list 
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 


-- 
Sincerely yours, 
Sergey Lukjanov 
Sahara Technical Lead 
(OpenStack Data Processing) 
Mirantis Inc. 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: OpenStack and SELinux fixes

2014-05-20 Thread Ryan Hallisey
Hi,

Could everyone please test OpenStack+SELinux with the latest RHEL7.0 builds.  
We are running into the same AVC denials that already have fixes released for them.

https://brewweb.devel.redhat.com/buildinfo?buildID=357647

Thank you.

Regards,
Miroslav and Ryan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] how to best deal with default periodic task spacing behavior

2014-05-20 Thread Davanum Srinivas
@Matt,

Agree, My vote would be to change existing behavior.

-- dims

On Tue, May 20, 2014 at 10:15 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:
 Between patch set 1 and patch set 3 here [1] we have different solutions to
 the same issue, which is if you don't specify a spacing value for periodic
 tasks then they run whenever the periodic task processor runs, which is
 non-deterministic and can be staggered if some tasks don't complete in a
 reasonable amount of time.

 I'm bringing this to the mailing list to see if there are more opinions out
 there, especially from operators, since patch set 1 changes the default
 behavior to have the spacing value be the DEFAULT_INTERVAL (hard-coded 60
 seconds) versus patch set 3 which makes that behavior configurable so the
 admin can set global default spacing for tasks, but defaults to the current
 behavior of running every time if not specified.

 I don't like a new config option, but I'm also not crazy about changing
 existing behavior without consensus.

 [1] https://review.openstack.org/#/c/93767/

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting cancelled for today.

2014-05-20 Thread Peter Pouliot
Hi All,

We still have some people travelling after ODS.   The meeting for this week 
will be cancelled and we will resume next week.

Best,

p

Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][mysql] IMPORTANT: MySQL Galera does *not* support SELECT ... FOR UPDATE

2014-05-20 Thread Rossella Sblendido
Please see inline.

cheers,

Rossella

On 05/20/2014 12:26 AM, Salvatore Orlando wrote:
 Some comments inline.

 Salvatore


 On 19 May 2014 20:32, sridhar basam sridhar.ba...@gmail.com wrote:




 On Mon, May 19, 2014 at 1:30 PM, Jay Pipes jaypi...@gmail.com wrote:

 Stackers,

 On Friday in Atlanta, I had the pleasure of moderating the
 database session at the Ops Meetup track. We had lots of good
 discussions and heard important feedback from operators on DB
 topics.

 For the record, I would not bring this point up so publicly
 unless I believed it was a serious problem affecting a large
 segment of users. When doing an informal survey of the
 users/operators in the room at the start of the session, out
 of approximately 200 people in the room, only a single person
 was using PostgreSQL, about a dozen were using standard MySQL
 master/slave replication, and the rest were using MySQL Galera
 clustering. So, this is a real issue for a large segment of
 the operators -- or at least the ones at the session. :)


 We are one of those operators that use Galera for replicating our
 mysql databases. We used to  see issues with deadlocks when having
 multiple mysql writers in our mysql cluster. As a workaround we
 have our haproxy configuration in an active-standby configuration
 for our mysql VIP. 

 I seem to recall we had a lot of the deadlocks happen through
 Neutron. When we go through our Icehouse testing, we will redo our
 multimaster mysql setup and provide feedback on the issues we see.


 The SELECT ... FOR UPDATE issue is going to be a non-trivial one for
 neutron as well. Some components, like IPAM, rely heavily on it.
 However, Neutron is a lot more susceptible to deadlock problems than
 nova because it does not currently implement a retry mechanism.
 This is something which should be added during the Juno release cycle
 regardless of all the other enhancements currently being planned, such
 as task-oriented operations.


 thanks,
  Sridhar

  

 Peter Boros, from Percona, was able to provide some insight on
 MySQL Galera topics, and one issue came up that is likely the
 cause of a lot of heartache for operators who use MySQL Galera
 (or Percona XtraDB Cluster).

 We were discussing whether people had seen deadlock issues [1]
 when using MySQL Galera in their deployment, and were
 brainstorming on why deadlocks might be seen. I had suggested
 that perhaps Nova's use of autoincrementing primary keys may
 have been the cause. Peter pretty quickly dispatched that
 notion, saying that Galera automatically handles
 autoincrementing keys using managed
 innodb_autoincrement_increment and innodb_autoincrement_offset
 config options.

 I think at that point I mentioned that there were a number of
 places that were using the SELECT ... FOR UPDATE construct in
 Nova (in SQLAlchemy, it's the with_lockmode('update')
 modification of the query object). Peter promptly said that
 was a problem. MySQL Galera does not support SELECT ... FOR
 UPDATE, since it has no concept of cross-node locking of
 records and results are non-deterministic.

 So... what to do?

 For starters, some information on the use of with_lockmode()
 in Nova and Neutron...

 Within Nova, there are actually only a few places where
 with_lockmode('update') is used. Unfortunately, the use of
 with_lockmode('update') is in the quota code, which tends to
 wrap largish blocks of code within the Nova compute execution
 code.

 Within Neutron, however, the use of with_lockmode('update') is
 all over the place. There are 44 separate uses of it in 11
 different files.


 I will report on a separate thread on this, so that we can have an
 assessment of where locking statements are used and why.
  

 We have a number of options:


 I think option 0 should be to rework/redesign the code, where possible,
 to avoid DB-level locking altogether.

I totally agree. Is anybody already coordinating this rework? I'd like
to help. After redesigning, it is gonna be easier to make a decision
regarding a distributed lock manager.
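
For readers following along, the construct under discussion looks like this
in SQLAlchemy (a minimal sketch with illustrative names, not actual
Nova/Neutron code):

    # Emits SELECT ... FOR UPDATE. On Galera the row lock is only taken
    # on the local node, so two nodes can both proceed, and one working
    # set is later rolled back with a certification failure that
    # surfaces as a "deadlock" error.
    reservation = (session.query(Reservation).
                   filter_by(project_id=project_id).
                   with_lockmode('update').
                   first())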

  


 1) Stop using MySQL Galera for databases of projects that
 contain with_lockmode('update')


 This looks hideous, but I am afraid this is what all people wishing to
 deploy Icehouse should consider doing.
  


 2) Put a big old warning in the docs somewhere about the
 problem of potential deadlocks or odd behaviour with Galera in
 these projects

 3) For Nova and Neutron, remove the use of
 

[openstack-dev] [horizon] Static file handling -- followup

2014-05-20 Thread Radomir Dopieralski
Hello,

this is a followup on the design session we had at the meeting about
the handling of static files. You can see the etherpad from that session
here: https://etherpad.openstack.org/p/juno-summit-horizon-static-files


The split:

We are going to use rather uninspired, but very clear and well-known
names. The horizon (library) part is going to be named
django-horizon, and the openstack_dashboard is going to be named
horizon. We will clone the horizon repository as django-horizon
soon(ish) and start removing the unwanted files from both of them
-- this way we will preserve the history.


The JavaScript libraries unbundling:

I'm packaging all the missing libraries, except for Angular.js, as
XStatic packages:

https://pypi.python.org/pypi/XStatic-D3
https://pypi.python.org/pypi/XStatic-Hogan
https://pypi.python.org/pypi/XStatic-JSEncrypt
https://pypi.python.org/pypi/XStatic-QUnit
https://pypi.python.org/pypi/XStatic-Rickshaw
https://pypi.python.org/pypi/XStatic-Spin

There is also a patch for unbundling JQuery:
https://review.openstack.org/#/c/82516/
And the corresponding global requirements for it:
https://review.openstack.org/#/c/94337/

Once it is in, I will prepare a patch with the rest of libraries.

We will also unbundle Bootstrap, but first we need to deal with its
compilation, read below.


The style files compilation:

We are going to go with PySCSS compiler, plus django-pyscss. The
proof-of-concept patch has been rebased and updated, and is waiting
for your reviews: https://review.openstack.org/#/c/90371/
It is also waiting on the addition of the needed libraries to the global
requirements: https://review.openstack.org/#/c/94376/
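
For the curious, the django-compressor wiring for this is a one-line
precompiler setting along these lines (a sketch assuming django-compressor;
see the patch above for the actual integration):

    COMPRESS_PRECOMPILERS = (
        ('text/scss', 'django_pyscss.compressor.DjangoScssFilter'),
    )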


The style files dependencies and pluggability:

Turns out that we don't have to generate a file with all the includes
after all, because django-pyscss actually solves that problem for us.
Horizon's plugins can refer to Horizon's own files easily now.


The linter and other tools:

We will be able to include the linter in the gate check without having
to explicitly depend on it in Horizon itself.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Post-summit next steps

2014-05-20 Thread Sanchez, Cristian A
Hi Rob, 
Could you please point me where the spec repo is?
Thanks

--Cristian

On 19/05/14 22:46, Robert Collins robe...@robertcollins.net wrote:

Hey everyone, it was great to see many of you at the summit - if you
were there and we didn't get time to say hello, then hopefully in
Paris we can do that ;)

I'd like everyone that ran a TripleO session to make sure the outcomes
of the session are captured into a spec in the specs repo - getting
design review from anyone that did not make it to the summit is
important before we cruise ahead. Please ping me if you don't get
prompt, effective review there - we should have very low latency for
specs reviews :).

Other than that, take a few days, get your breath back, and then onward
to Juno.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Post-summit next steps

2014-05-20 Thread Chris Jones
Hi

On 20 May 2014, at 16:25, Sanchez, Cristian A cristian.a.sanc...@intel.com 
wrote:

 Could you please point me where the spec repo is?

http://git.openstack.org/cgit/openstack/tripleo-specs/

Cheers,

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Post-summit next steps

2014-05-20 Thread Charles Crouch


- Original Message -
 Hi Rob,
 Could you please point me where the spec repo is?

All the spec repos are under the standard git location:

http://git.openstack.org/cgit/openstack/

e.g.
http://git.openstack.org/cgit/openstack/tripleo-specs/

 Thanks
 
 --Cristian
 
 On 19/05/14 22:46, Robert Collins robe...@robertcollins.net wrote:
 
 Hey everyone, it was great to see many of you at the summit - if you
 were there and we didn't get time to say hello, then hopefully in
 Paris we can do that ;)
 
 I'd like everyone that ran a TripleO session to make sure the outcomes
 of the session are captured into a spec in the specs repo - getting
 design review from anyone that did not make it to the summit is
 important before we cruise ahead. Please ping me if you don't get
 prompt, effective review there - we should have very low latency for
 specs reviews :).
 
 Other than that, take a few days, get your breath back, and then onward
 to Juno.
 
 -Rob
 
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-20 Thread Chris Wright
* balaj...@freescale.com (balaj...@freescale.com) wrote:
  -Original Message-
  From: Kyle Mestery [mailto:mest...@noironetworks.com]
  Sent: Tuesday, May 20, 2014 12:19 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit
  
  On Mon, May 19, 2014 at 1:44 PM, Ian Wells ijw.ubu...@cack.org.uk
  wrote:
   I think the Service VM discussion resolved itself in a way that
   reduces the problem to a form of NFV - there are standing issues using
   VMs for services, orchestration is probably not a responsibility that
   lies in Neutron, and as such the importance is in identifying the
   problems with the plumbing features of Neutron that cause
   implementation difficulties.  The end result will be that VMs
   implementing tenant services and implementing NFV should be much the
   same, with the addition of offering a multitenant interface to
  Openstack users on the tenant service VM case.
  
   Geoff Arnold is dealing with the collating of information from people
   that have made the attempt to implement service VMs.  The problem
   areas should fall out of his effort.  I also suspect that the key
   points of NFV that cause problems (for instance, dealing with VLANs
   and trunking) will actually appear quite high up the service VM list as
  well.
   --
  There is a weekly meeting for the Service VM project [1], I hope some
  representatives from the NFV sub-project can make it to this meeting and
  participate there.
 [P Balaji-B37839] I agree with Kyle, so that we will have enough synch 
 between Service VM and NFV goals.

Makes good sense.  Will make sure to get someone there.

thanks,
-chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Meeting time moving?

2014-05-20 Thread Jarret Raim
We have not changed anything as of yet. The goal was to see if we could
find some times that work for Jamie, but I haven't done it yet. I'll post
something this week and we'll see if there is consensus.


Thanks,

--
Jarret Raim 
@jarretraim





On 5/20/14, 9:22 AM, Clark, Robert Graham robert.cl...@hp.com wrote:

Hi All,

At the summit I heard that the Barbican meeting time might be moving,
has anything been agreed?

Cheers
-Rob



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Use of environment variables in tripleo-incubator

2014-05-20 Thread Sullivan, Jon Paul
Hi,

There are a number of reviews [1][2] that have attracted -1 or -2 votes because
they add new environment variables.  It is looking like this is becoming a
policy.

If this is a policy, then could that be stated, and an alternate mechanism made 
available so that any reviews adding environment variables can use the 
replacement mechanism, please?

Otherwise, some guidelines for developers where environment variables are 
acceptable or not would equally be useful.

[1] https://review.openstack.org/85009
[2] https://review.openstack.org/85418

Thanks,
*: jonpaul.sulli...@hp.com :) Cloud Services - @hpcloud
*: +353 (91) 75 4169

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2.
Registered Number: 361933


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-20 Thread Jay Pipes

Hi Zane, sorry for the delayed response. Comments inline.

On 05/06/2014 09:09 PM, Zane Bitter wrote:

On 05/05/14 13:40, Solly Ross wrote:

One thing that I was discussing with @jaypipes and @dansmith over
on IRC was the possibility of breaking flavors down into separate
components -- i.e. have a disk flavor, a CPU flavor, and a RAM flavor.
This way, you still get the control of the size of your building blocks
(e.g. you could restrict RAM to only 2GB, 4GB, or 16GB), but you avoid
exponential flavor explosion by separating out the axes.


Dimitry and I have discussed this on IRC already (no-one changed their
mind about anything as a result), but I just wanted to note here that I
think even this idea is crazy.

VMs are not allocated out of a vast global pool of resources. They're
allocated on actual machines that have physical hardware costing real
money in fixed ratios.

Here's a (very contrived) example. Say your standard compute node can
support 16 VCPUs and 64GB of RAM. You can sell a bunch of flavours:
maybe 1 VCPU + 4GB, 2 VCPU + 8GB, 4 VCPU + 16GB... &c. But if (as an
extreme example) you sell a server with 1 VCPU and 64GB of RAM you have
a big problem: 15 VCPUs that nobody has paid for and you can't sell.
(Disks add a new dimension of wrongness to the problem.)


You are assuming a public cloud provider use case above. As much as I 
tend to focus on the utility cloud model, where the incentives are 
around maximizing the usage of physical hardware by packing as many 
paying tenants as possible into a fixed set of resources, this is only one 
domain for OpenStack.


There are, for good or bad, IT shops and telcos that frankly are willing 
to dump money into an inordinate amount of hardware -- and see that 
hardware be inefficiently used -- in order to appease the demands of 
their application customer tenants. The impulse of onboarding teams for 
these private cloud systems is to just say yes, with utter disregard 
for the overall cost efficiency of the proposed customer use cases.


If there was a simple switching mechanism that allowed a deployer to 
turn on or off this ability to allow tenants to construct specialized 
instance type configurations, then who really loses here? Public or 
utility cloud providers would simply leave the switch to its default of 
off and folks who wanted to provide this functionality to their users 
could provide it. Of course, there are clear caveats around lack of 
portability to other clouds -- but let's face it, cross-cloud 
portability has other challenges beyond this particular point ;)



The insight of flavours, which is fundamental to the whole concept of
IaaS, is that users must pay the *opportunity cost* of their resource
usage. If you allow users to opt, at their own convenience, to pay only
the actual cost of the resources they use regardless of the opportunity
cost to you, then your incentives are no longer aligned with your
customers.


Again, the above assumes a utility cloud model. Sadly, that isn't the 
only cloud model.



You'll initially be very popular with the kind of customers
who are taking advantage of you, but you'll have to hike prices across
the board to make up the cost leading to a sort of dead-sea effect. A
Gresham's Law of the cloud, if you will, where bad customers drive out
good customers.

Simply put, a cloud allowing users to define their own flavours *loses*
to one with predefined flavours 10 times out of 10.

In the above example, you just tell the customer: bad luck, you want
64GB of RAM, you buy 16 VCPUs whether you want them or not. It can't
actually hurt to get _more_ than you wanted, even though you'd rather
not pay for it (provided, of course, that everyone else *is* paying for
it, and cross-subsidising you... which they won't).

Now, it's not the OpenStack project's job to prevent operators from
going bankrupt. But I think at the point where we are adding significant
complexity to the project just to enable people to confirm the
effectiveness of a very obviously infallible strategy for losing large
amounts of money, it's time to draw a line.


Actually, we're not proposing something more complex, IMO.

What I've been discussing on IRC and other places is getting rid of the 
concept of flavours entirely except for in user interfaces, as an easy 
way of templatizing the creation of instances. Once an instance is 
launched, I've proposed that we don't store the instance_type_id with 
the instance any more. Right now, we store the memory, CPU, and root 
disk amounts in the instances table, so besides the instance_type 
extra_specs information, there is currently no need to keep the concept 
of an instance_type around after the instance launch sequence has been 
initiated. The instance_type is decomposed into its resource units and 
those resource units are used for scheduling decisions, not the flavour 
itself. In this way, an instance_type is nothing more than a UI template 
to make instance creation a bit easier.
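
To make that concrete, here is a toy sketch of the idea (hypothetical names,
not Nova code):

    # A flavour is only a creation-time template. Once expanded, the
    # scheduler and the instances table deal purely in resource units,
    # so nothing needs to reference the flavour id after launch.
    FLAVORS = {
        'm1.small': {'vcpus': 1, 'memory_mb': 2048, 'root_gb': 20},
    }

    def resources_for(flavor_name):
        return dict(FLAVORS[flavor_name])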


The problem to date is that 

Re: [openstack-dev] Chalenges with highly available service VMs

2014-05-20 Thread Aaron Rosen
Hi Praveen,

I think we should fix the update_method instead to properly check for this.
I don't see any advantage to allowing the fixed_ips/mac to be in the
allowed_address_pairs since they are explicitly allowed. What's your
motivation for changing this?

Aaron


On Mon, May 19, 2014 at 4:05 PM, Praveen Yalagandula 
yprav...@avinetworks.com wrote:

 Hi Aaron,

 Thanks for the prompt response.

 If the overlap does not have any negative effect, can we please just
 remove this check? It creates confusion as there are certain code paths
 where we do not perform this check. For example, the current code does NOT
 perform this check when we are updating the list of allowed-address-pairs
 -- I can successfully assign an existing fixed IP address to the
 allowed-address-pairs. The check is being performed on only one code path -
 when assigning fixed IPs.

 If it sounds right to you, I can submit my patch removing this check.

 Thanks,
 Praveen



 On Mon, May 19, 2014 at 12:32 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi,

 Sure, if you look at this method:

  def _check_fixed_ips_and_address_pairs_no_overlap(self, context, port):
      address_pairs = self.get_allowed_address_pairs(context, port['id'])
      for fixed_ip in port['fixed_ips']:
          for address_pair in address_pairs:
              if (fixed_ip['ip_address'] == address_pair['ip_address']
                      and port['mac_address'] == address_pair['mac_address']):
                  raise addr_pair.AddressPairMatchesPortFixedIPAndMac()




 it checks that the allowed_address_pairs don't overlap with fixed_ips and
 mac_address on the port. The only reason we do this additional check is
 that having the same fixed_ip and mac_address pair as an
 allowed_address_pair would have no effect since the fixed_ip/mac on the
 port inherently allows that traffic through.

 Best,

 Aaron



 On Mon, May 19, 2014 at 12:22 PM, Praveen Yalagandula 
 yprav...@avinetworks.com wrote:

 Hi Aaron,

 In OVS and ML2 plugins, on port-update, there is a check to make sure
 that allowed-address-pairs and fixed-ips don't overlap. Can you please
 explain why that is needed?

 - icehouse final: neutron/plugins/ml2/plugin.py 

 677 elif changed_fixed_ips:

 678 self._check_fixed_ips_and_address_pairs_no_overlap(

 679 context, updated_port)
 ---

 Thanks,
 Praveen


 On Wed, Jul 17, 2013 at 3:45 PM, Aaron Rosen aro...@nicira.com wrote:

 Hi Ian,

 For shared networks if the network is set to port_security_enabled=True
 then the tenant will not be able to remove port_security_enabled from their
 port if they are not the owner of the network. I believe this is the
 correct behavior we want. In addition, only admin's are able to create
 shared networks by default.

 I've created the following blueprint
 https://blueprints.launchpad.net/neutron/+spec/allowed-address-pairs and
 doc:
 https://docs.google.com/document/d/1hyB3dIkRF623JlUsvtQFo9fCKLsy0gN8Jf6SWnqbWWA/edit?usp=sharing
 which will provide us a way to do this. It would be awesome if you could
 check it out and let me know what you think.

 Thanks,

 Aaron


 On Tue, Jul 16, 2013 at 10:34 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 10 July 2013 21:14, Vishvananda Ishaya vishvana...@gmail.com
 wrote:
  It used to be essential back when we had nova-network and all
 tenants
  ended up on one network.  It became less useful when tenants could
  create their own networks and could use them as they saw fit.
 
  It's still got its uses - for instance, it's nice that the metadata
  server can be sure that a request is really coming from where it
  claims - but I would very much like it to be possible to, as an
  option, explicitly disable antispoof - perhaps on a per-network
 basis
  at network creation time - and I think we could do this without
  breaking the security model beyond all hope of usefulness.
 
  Per network and per port makes sense.
 
  After all, this is conceptually the same as enabling or disabling
  port security on your switch.

 Bit late on the reply to this, but I think we should be specific on
 the network, at least at creation time, on what disabling is allowed
 at port level (default off, may be off, must be on as now).  Yes, it's
 exactly like disabling port security, and you're not always the
 administrator of your own switch; if we extend the analogy you
 probably wouldn't necessarily want people turning antispoof off on an
 explicitly shared-tenant network.
 --
 Ian.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 

Re: [openstack-dev] [nova][neutron][mysql] IMPORTANT: MySQL Galera does *not* support SELECT ... FOR UPDATE

2014-05-20 Thread Jay Pipes

On 05/20/2014 04:53 AM, Julien Danjou wrote:

On Mon, May 19 2014, Jay Pipes wrote:


I think at that point I mentioned that there were a number of places that
were using the SELECT ... FOR UPDATE construct in Nova (in SQLAlchemy, it's
the with_lockmode('update') modification of the query object). Peter
promptly said that was a problem. MySQL Galera does not support SELECT ...
FOR UPDATE, since it has no concept of cross-node locking of records and
results are non-deterministic.


So you send a command that's not supported and the whole software
deadlocks? Is there a bug number about that or something? I cannot
understand how this can be possible and considered as something normal
(that's the feeling I have reading your mail, I may be wrong).


Yes, you entirely misread the email.

The whole system does not deadlock -- in fact, it's not even a deadlock 
that is causing the problem, as you might have known if you read the 
email. The error is called a deadlock but it's actually a timeout 
failure to certify the working set, which is different from a deadlock.



We have a number of options:

1) Stop using MySQL Galera for databases of projects that contain
with_lockmode('update')

2) Put a big old warning in the docs somewhere about the problem of
potential deadlocks or odd behaviour with Galera in these projects

3) For Nova and Neutron, remove the use of with_lockmode('update') and
instead use a coarse-grained file lock or a distributed lock manager for
those areas where we need deterministic reads or quiescence.

4) For the Nova db quota driver, refactor the driver to either use a
non-locking method for reservation and quota queries or move the driver out
into its own projects (or use something like Climate and make sure that
Climate uses a non-blocking algorithm for those queries...)

Thoughts?


5) Stop leveling down our development, and rely and leverage a powerful
RDBMS that provides interesting feature, such as PostgreSQL.


For the record, there's nothing about this that affects PostgreSQL 
deployments. There's also little in the PostgreSQL community that will 
help anyone with write load balancing nor anything in the PostgreSQL 
community that supports the kinds of things that MySQL Galera supports 
-- synchronous working-set replication.


So, instead of being a snarky person that thinks anything that doesn't 
use PostgreSQL is worthless, how about just letting those of us who work 
with multiple DBs talk about solving a problem.



Sorry, had to say it, but it's pissing me off to see the low quality of
the work that is done around SQL in OpenStack.


Hmm, this coming from Ceilometer...

-jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Updated Object Model?

2014-05-20 Thread Brandon Logan
Thanks Kyle and Eugene.

I can do this if no one else wants to.  If someone really wants to do this then 
let me know and I’ll gladly give it up.  Just let me know soon.  I just want to 
get this done ASAP.

Thanks,
Brandon

From: Eugene Nikanorov enikano...@mirantis.com
Reply-To: openstack-dev@lists.openstack.org
openstack-dev@lists.openstack.org
Date: Tuesday, May 20, 2014 at 9:35 AM
To: openstack-dev@lists.openstack.org
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Updated Object Model?

Hi folks,

Agree with Kyle, you may go ahead and update the spec on review to reflect the 
design discussed at the summit.

Thanks,
Eugene.


On Tue, May 20, 2014 at 6:07 PM, Kyle Mestery mest...@noironetworks.com wrote:
On Mon, May 19, 2014 at 9:28 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
 In the API meeting at the summit, Mark McClain mentioned that the
 existing API should be supported, but deprecated so as not to interrupt
 those using the existing API.  To me, that sounds like the object model
 can change but there needs to be some kind of adapter/translation layer
 that modifies any existing current API calls to the new object model.

 So currently there is this blueprint spec that Eugene submitted:

 https://review.openstack.org/#/c/89903/3/specs/juno/lbaas-api-and-objmodel-improvement.rst

 That is implementing the object model with VIP as root object.  I
 suppose this needs to be changed to have the changes we agreed on at the
 summit.  Also, this blueprint should also cover the layer in which the
 existing API calls get mapped to this object model.

 My question is to anyone who knows for certain: should this blueprint
 just be changed to reflect the new object model agreed on at the summit
 or should a new blueprint spec be created?  If it should just be changed
 should it wait until Eugene gets back from vacation since he's the one
 who created this blueprint spec?

If you think it makes sense to change this existing document, I would
say we should update Eugene's spec mentioned above to reflect what was
agreed upon at the summit. I know Eugene is on vacation this week, so
in this case it may be ok for you to push a new revision of his
specification while he's out, updating it to reflect the object model
changes. This way we can make some quick progress on this front. We
won't approve this until he gets back and has a chance to review it.
Let me know if you need help in pulling this spec down and pushing a
new version.

Thanks,
Kyle

 After that, then the API change blueprint spec should be created that
 adds the /loadbalancers resource and other changes.

 If anyone else can add anything please do.  If I said anything wrong
 please correct me, and if anyone can answer my question above please do.

 Thanks,
 Brandon Logan

 On Mon, 2014-05-19 at 17:06 -0400, Susanne Balle wrote:
 Great summit!! fantastic to meeting you all in person.


 We now have agreement on the Object model. How do we turn that into
 blueprints and also how do we start making progress on the rest of the
 items we agree upon at the summit?


 Susanne


 On Fri, May 16, 2014 at 2:07 AM, Brandon Logan
  brandon.lo...@rackspace.com wrote:
 Yeah that’s a good point.  Thanks!


  From: Eugene Nikanorov enikano...@mirantis.com
  Reply-To: openstack-dev@lists.openstack.org
  openstack-dev@lists.openstack.org

 Date: Thursday, May 15, 2014 at 10:38 PM

 To: 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Updated Object
 Model?



 Brandon,


 It's allowed right now just per API. It's up to a backend to
 decide the status of a node in case some monitors find it
 dead.


 Thanks,
 Eugene.




 On Fri, May 16, 2014 at 4:41 AM, Brandon Logan
  brandon.lo...@rackspace.com 
 wrote:
 I have concerns about multiple health monitors on the
 same pool.  Is this always going to be the same type
 of health monitor?  There’s also ambiguity in the case
 where one health monitor fails and another doesn’t.
  Is it an AND or OR that determines whether the member
 is down or not?


 Thanks,
 Brandon Logan


 From: Eugene Nikanorov 
 

Re: [openstack-dev] [Ironic] - Integration with neutron using external attachment point

2014-05-20 Thread Devananda van der Veen
Hi Kevin!

I had a few conversations with folks at the summit regarding this. Broadly
speaking, yes -- this integration would be very helpful for both discovery
and network/tenant isolation at the bare metal layer.

I've left a few comments inline


On Mon, May 19, 2014 at 3:52 PM, Kevin Benton blak...@gmail.com wrote:

 Hello,

 I am working on an extension for neutron to allow external attachment
 point information to be stored and used by backend plugins/drivers to place
 switch ports into neutron networks[1].

 One of the primary use cases is to integrate ironic with neutron. The
 basic workflow is that ironic will create the external attachment points
 when servers are initially installed.


This also should account for servers that are already racked, which Ironic
is instructed to manage. These servers would be booted into a discovery
state, e.g. running ironic-python-agent, and hardware information
(inventory, LLDP data, etc.) could be sent back to Ironic.

To do this, nodes not yet registered with Ironic will need to be PXE booted
on a common management LAN (either untagged VLAN or a specific management
VLAN), which can route HTTP(S) and TFTP traffic to an instance of
ironic-api and ironic-conductor services. How will the routing be done by
Neutron for unknown ports?


 This step could either be automated (extracting the switch ID and port number
  from the LLDP message) or it could be performed manually by an admin who notes
  the ports a server is plugged into.


Ironic could extract info from LLDP if the machine has booted into the
ironic-python-agent ramdisk and is able to communicate with Ironic
services. So it needs to be networked /before/ it's enrolled with Ironic.
If that's possible -- great. I believe this is the workflow that the IPA
team intends to follow.

Setting it manually should also, of course, be possible, but less
manageable with large numbers of servers.



 Then when an instance is chosen for assignment and the neutron port needs
 to be created, the creation request would reference the corresponding
 attachment ID and neutron would configure the physical switch port to place
 the port on the appropriate neutron network.


Implementation question here -- today, Nova does the network attachment for
instances (or at least, Nova initiates the calls out to Neutron). Ironic
can expose this information to Nova and allow Nova to coordinate with
Neutron, or Ironic can simply call out to Neutron, as it does today when
setting the dhcp extra options. I'm not sure which approach is better.
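
As an illustrative sketch of the second option, using python-neutronclient
(the 'external_attachment_id' key is hypothetical, since the extension is
only proposed; tenant_network_id and attachment_id are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone.example.com:5000/v2.0')

    # Reference the attachment point recorded at enrollment time so the
    # backend can put the physical switch port on the tenant network.
    neutron.create_port({'port': {
        'network_id': tenant_network_id,
        'binding:profile': {'external_attachment_id': attachment_id},
    }})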


Cheers,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova default quotas

2014-05-20 Thread Cazzolato, Sergio J
I would like to hear your thoughts about an idea to add a way to manage the default 
quota values through the API. 

The idea is to use the current quota api, but sending 'default' instead of the 
tenant_id. This change would apply to quota-show and quota-update methods.
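
A sketch of how that might look via python-novaclient's quota manager, where
nova is an instantiated client and the literal 'default' in place of a tenant
id is the proposed change:

    # show the default quotas rather than a specific tenant's
    nova.quotas.get('default')

    # update a default value for all tenants going forward
    nova.quotas.update('default', instances=20)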

This approach will help to simplify the implementation of another blueprint 
named per-flavor-quotas

Feedback? Suggestions?


Sergio Juan Cazzolato
Intel Software Argentina 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][mysql] IMPORTANT: MySQL Galera does *not* support SELECT ... FOR UPDATE

2014-05-20 Thread Jay Pipes

On 05/19/2014 02:32 PM, sridhar basam wrote:

On Mon, May 19, 2014 at 1:30 PM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

Stackers,

On Friday in Atlanta, I had the pleasure of moderating the database
session at the Ops Meetup track. We had lots of good discussions and
heard important feedback from operators on DB topics.

For the record, I would not bring this point up so publicly unless I
believed it was a serious problem affecting a large segment of
users. When doing an informal survey of the users/operators in the
room at the start of the session, out of approximately 200 people in
the room, only a single person was using PostgreSQL, about a dozen
were using standard MySQL master/slave replication, and the rest
were using MySQL Galera clustering. So, this is a real issue for a
large segment of the operators -- or at least the ones at the
session. :)


We are one of those operators that use Galera for replicating our mysql
databases. We used to see issues with deadlocks when having multiple
mysql writers in our mysql cluster. As a workaround we have our haproxy
configuration in an active-standby configuration for our mysql VIP.

I seem to recall we had a lot of the deadlocks happen through Neutron.
When we go through our Icehouse testing, we will redo our multimaster
mysql setup and provide feedback on the issues we see.


Thanks very much, Sridhar, much appreciated.

This issue was raised at the Neutron IRC meeting yesterday, and we've 
agreed to take a staged approach. We will first work on documentation to 
add to the operations guide that explains the issues (and the tradeoffs 
of going to a single-writer cluster configuration vs. just having the 
clients retry some request). Later stages will work on a non-locking 
quota-management algorithm, possibly in conjunction with Climate, and 
looking into how to use coarser-grained file locks or a distributed lock 
manager for handling cross-component deterministic reads in Neutron.
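
For the retry stage, a minimal sketch of the usual decorator approach
(DBDeadlock here stands for oslo's wrapped deadlock error; the rest is
illustrative):

    import functools
    import time

    def retry_on_deadlock(func, max_retries=5):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries - 1:
                        raise
                    # brief linear backoff before replaying the transaction
                    time.sleep(0.5 * (attempt + 1))
        return wrapper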


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][mysql] IMPORTANT: MySQL Galera does *not* support SELECT ... FOR UPDATE

2014-05-20 Thread pcrews

On 05/20/2014 10:07 AM, Jay Pipes wrote:

On 05/19/2014 02:32 PM, sridhar basam wrote:

On Mon, May 19, 2014 at 1:30 PM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

Stackers,

On Friday in Atlanta, I had the pleasure of moderating the database
session at the Ops Meetup track. We had lots of good discussions and
heard important feedback from operators on DB topics.

For the record, I would not bring this point up so publicly unless I
believed it was a serious problem affecting a large segment of
users. When doing an informal survey of the users/operators in the
room at the start of the session, out of approximately 200 people in
the room, only a single person was using PostgreSQL, about a dozen
were using standard MySQL master/slave replication, and the rest
were using MySQL Galera clustering. So, this is a real issue for a
large segment of the operators -- or at least the ones at the
session. :)


We are one of those operators that use Galera for replicating our mysql
databases. We used to see issues with deadlocks when having multiple
mysql writers in our mysql cluster. As a workaround we have our haproxy
configuration in an active-standby configuration for our mysql VIP.

I seem to recall we had a lot of the deadlocks happen through Neutron.
When we go through our Icehouse testing, we will redo our multimaster
mysql setup and provide feedback on the issues we see.


Thanks very much, Sridhar, much appreciated.

This issue was raised at the Neutron IRC meeting yesterday, and we've
agreed to take a staged approach. We will first work on documentation to
add to the operations guide that explains the issues (and the tradeoffs
of going to a single-writer cluster configuration vs. just having the
clients retry some request). Later stages will work on a non-locking
quota-management algorithm, possibly in conjunction with Climate, and
looking into how to use coarser-grained file locks or a distributed lock
manager for handling cross-component deterministic reads in Neutron.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Am late to this topic, but wanted to share this in case anyone wanted to 
read further on this behavior with galera - 
http://www.mysqlperformanceblog.com/2012/08/17/percona-xtradb-cluster-multi-node-writing-and-unexpected-deadlocks/


--
patrick


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Post-summit next steps

2014-05-20 Thread Sanchez, Cristian A
Thanks

On 20/05/14 12:43, Chris Jones c...@tenshu.net wrote:

Hi

On 20 May 2014, at 16:25, Sanchez, Cristian A
cristian.a.sanc...@intel.com wrote:

 Could you please point me where the spec repo is?

http://git.openstack.org/cgit/openstack/tripleo-specs/

Cheers,

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] bug triage day after summit

2014-05-20 Thread Sergey Lukjanov
I proposed May 26 initially because it's the first day after my vacation :)

If there will be no objections, we're moving bug triage day to May 27.

Thanks.

On Tuesday, May 20, 2014, Chad Roberts crobe...@redhat.com wrote:

 +1 for May 27.

 - Original Message -
 From: Andrew Lazarev alaza...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, May 20, 2014 10:20:58 AM
 Subject: Re: [openstack-dev] [sahara] bug triage day after summit

 I think May 26 was a random 'day after summit'. I'm Ok with May 27 too.

 Andrew.


 On Tue, May 20, 2014 at 10:16 AM, Sergey Lukjanov  
  slukja...@mirantis.com wrote:


 I'm ok with moving it to May 27.

 On Tue, May 20, 2014, Michael McCune mimcc...@redhat.com 
 wrote:


 I think in our eagerness to triage bugs we might have missed that May 26
 is a holiday in the U.S.

 I know some of us have the day off work and while that doesn't necessarily
 stop the effort, it might throw a wrench in people's holiday weekend plans.
 I'm wondering if we should re-evaluate and make the following day(May 27)
 triage day instead?

 regards,
 mike

 - Original Message -
  Hey sahara folks,
 
  let's make a Bug Triage Day after the summit.
 
  I'm proposing the May, 26 for it.
 
  Any thoughts/objections?
 
  Thanks.
 
  --
  Sincerely yours,
  Sergey Lukjanov
  Sahara Technical Lead
  (OpenStack Data Processing)
  Mirantis Inc.
 
  ___
  OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Chalenges with highly available service VMs

2014-05-20 Thread Praveen Yalagandula
Hi Aaron,

The main motivation is simplicity. Consider the case where we want to allow
the CIDR 10.10.1.0/24 on a port which has a fixed IP of
10.10.1.1. Now if we do not want to allow overlapping, then one needs to
add 8 CIDRs to get around this - (10.10.1.128/25, 10.10.1.64/26,
10.10.1.32/27, 10.10.1.16/28, 10.10.1.8/29, 10.10.1.4/30, 10.10.1.2/31,
10.10.1.0/32); which makes it cumbersome.
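
For what it's worth, the covering set is mechanical to compute, e.g. with
Python 3's stdlib ipaddress module (illustration only):

    import ipaddress

    net = ipaddress.ip_network('10.10.1.0/24')
    fixed = ipaddress.ip_network('10.10.1.1/32')
    # prints the 8 CIDRs covering the /24 minus the fixed IP
    print(sorted(net.address_exclude(fixed)))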

In any case, allowed-address-pairs is ADDING on to what is allowed because
of the fixed IPs. So, there is no possibility of conflict. The check would
probably make sense only if we were maintaining denied addresses instead of
allowed addresses.

Cheers,
Praveen


On Tue, May 20, 2014 at 9:34 AM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi Praveen,

 I think we should fix the update_method instead to properly check for
 this. I don't see any advantage to allow the fixed_ips/mac to be in the
 allowed_address_pairs since they are explicitly allowed. What's your
 motivation for changing this?

 Aaron


 On Mon, May 19, 2014 at 4:05 PM, Praveen Yalagandula 
 yprav...@avinetworks.com wrote:

 Hi Aaron,

 Thanks for the prompt response.

 If the overlap does not have any negative effect, can we please just
 remove this check? It creates confusion as there are certain code paths
 where we do not perform this check. For example, the current code does NOT
 perform this check when we are updating the list of allowed-address-pairs
 -- I can successfully assign an existing fixed IP address to the
 allowed-address-pairs. The check is being performed on only one code path -
 when assigning fixed IPs.

 If it sounds right to you, I can submit my patch removing this check.

 Thanks,
 Praveen



  On Mon, May 19, 2014 at 12:32 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi,

 Sure, if you look at this method:

 def _check_fixed_ips_and_address_pairs_no_overlap(self, context, port):
     # Look up the allowed-address-pairs configured on this port.
     address_pairs = self.get_allowed_address_pairs(context, port['id'])
     for fixed_ip in port['fixed_ips']:
         for address_pair in address_pairs:
             # A pair that duplicates the port's own fixed IP + MAC is
             # rejected, since that traffic is already implicitly allowed.
             if (fixed_ip['ip_address'] == address_pair['ip_address']
                     and port['mac_address'] ==
                     address_pair['mac_address']):
                 raise addr_pair.AddressPairMatchesPortFixedIPAndMac()



 it checks that the allowed_address_pairs don't overlap with fixed_ips
 and mac_address on the port. The only reason we do this additional check is
 that having the same fixed_ip and mac_address pair as an
 allowed_address_pair would have no effect since the fixed_ip/mac on the
 port inherently allows that traffic through.

 Best,

 Aaron



 On Mon, May 19, 2014 at 12:22 PM, Praveen Yalagandula 
 yprav...@avinetworks.com wrote:

 Hi Aaron,

 In OVS and ML2 plugins, on port-update, there is a check to make sure
 that allowed-address-pairs and fixed-ips don't overlap. Can you please
 explain why that is needed?

 - icehouse final: neutron/plugins/ml2/plugin.py 
 677 elif changed_fixed_ips:
 678     self._check_fixed_ips_and_address_pairs_no_overlap(
 679         context, updated_port)
 ---

 Thanks,
 Praveen


 On Wed, Jul 17, 2013 at 3:45 PM, Aaron Rosen aro...@nicira.com wrote:

 Hi Ian,

 For shared networks if the network is set to
 port_security_enabled=True then the tenant will not be able to remove
 port_security_enabled from their port if they are not the owner of the
 network. I believe this is the correct behavior we want. In addition, only
 admin's are able to create shared networks by default.

 I've created the following blueprint
  https://blueprints.launchpad.net/neutron/+spec/allowed-address-pairs and 
  doc:
  https://docs.google.com/document/d/1hyB3dIkRF623JlUsvtQFo9fCKLsy0gN8Jf6SWnqbWWA/edit?usp=sharing which
  will provide us a way to do this. It would be awesome if you could
 check it out and let me know what you think.

 Thanks,

 Aaron


  On Tue, Jul 16, 2013 at 10:34 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 10 July 2013 21:14, Vishvananda Ishaya vishvana...@gmail.com
 wrote:
  It used to be essential back when we had nova-network and all
 tenants
  ended up on one network.  It became less useful when tenants could
  create their own networks and could use them as they saw fit.
 
  It's still got its uses - for instance, it's nice that the metadata
  server can be sure that a request is really coming from where it
  claims - but I would very much like it to be possible to, as an
  option, explicitly disable antispoof - perhaps on a per-network
 basis
  at network creation time - and I think we could do this without
  breaking the security model beyond all hope of usefulness.
 
  Per network and per port makes sense.
 
  After all, this is conceptually the same as enabling or disabling
  port security on your switch.

 Bit late on the reply to this, but I think we should be specific on
 the network, at least at creation time, on what disabling is allowed
 at port level (default off, may be off, 

Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-05-20 Thread Bartosz Górski

Hi Tim,

Maybe instead of just a boolean flag like --nested on resource-list, 
we could add an optional argument like --depth X or --nested-level X (X being 
an integer) to limit the depth when recursively listing nested resources?
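
For example, a hypothetical invocation (stack name made up):

    heat resource-list mystack --depth 2

i.e. recurse two levels into nested stacks instead of choosing between only
the top level and the entire tree.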


Best,
Bartosz

On 05/19/2014 09:13 PM, Tim Schnell wrote:

Blueprint:
https://blueprints.launchpad.net/heat/+spec/explode-nested-resources

Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list

Tim

On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:


On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com wrote:



On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
wrote:


On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:

Hi Nilakhya,

As Randall mentioned we did discuss this exact issue at the summit. I
was
planning on putting a blueprint together today to continue the
discussion.
The Stack Preview call is already doing the necessary recursion to
gather
the resources so we discussed being able to pass a stack id to the
preview
endpoint to get all of the resources.

However, after thinking about it some more, I agree with Randall that
maybe this should be an extra query parameter passed to the
resource-list
call. I'll have the blueprint up later today, unless you have already
started on it.

Note there is a patch from Anderson/Richard which may help with this:

https://review.openstack.org/#/c/85781/

The idea was to enable easier introspection of resources backed by
nested
stacks in a UI, but it could be equally useful to generate a tree
resource view in the CLI client by walking the links.
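
As a rough sketch of that client-side walk (assuming each resource exposes a
link with rel='nested' per the patch above, and that heatclient accepts the
'name/id' form as a stack identifier):

    def print_tree(client, stack_id, depth=0):
        for res in client.resources.list(stack_id):
            print('  ' * depth + res.resource_name)
            for link in getattr(res, 'links', []):
                if link['rel'] == 'nested':
                    # the link href ends in .../stacks/<name>/<id>
                    nested_id = '/'.join(link['href'].split('/')[-2:])
                    print_tree(client, nested_id, depth + 1)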

This would obviously be less efficient than recursing inside the
engine,
but arguably the output would be much more useful if it retains the
nesting
structure, as opposed to presenting a fully flattened soup of
resources
with no idea which stack/layer they belong to.

Steve

Could we simply add stack name/id to this output if the flag is passed? I
agree that we currently have the capability to traverse the tree
structure of nested stacks, but several folks have requested this
capability, mostly for UI/UX purposes. It would be faster if you want the
flat structure and we still retain the capability to create your own
tree/widget/whatever by following the links. Also, I think its best to
include this in the API directly since not all users are integrating
using the python-heatclient.

+1 for adding the stack name/id to the output to maintain a reference to
the initial stack that the resource belongs to. The original stated
use-case that I am aware of was to have a flat list of all resources
associated with a stack to be displayed in the UI when the user prompts to
delete a stack. This would prevent confusion about what and why different
resources are being deleted due to the stack delete.

This use-case does not require any information about the nested stacks but
I can foresee that information being useful in the future. I think a
flattened data structure (with a reference to stack id) is still the most
efficient solution. The patch landed by Anderson/Richard provides an
alternate method to drill down into nested stacks if the hierarchy is
important information though this is not the optimal solution in this
case.

Tim


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Meeting minutes for 5/20/2014

2014-05-20 Thread Collins, Sean
http://eavesdrop.openstack.org/meetings/neutron_ipv6/2014/neutron_ipv6.2014-05-20-14.00.html

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Privacy extension

2014-05-20 Thread Shixiong Shang
Awesome! Seems like we reached agreement on not covering the privacy extension at 
this moment. I am totally fine with that. To put closure on this subject, do 
you think we need to document it and provide users with a work-around in case 
somebody asks for it in the Juno release?

Shixiong




On May 16, 2014, at 3:29 PM, Robert Li (baoli) ba...@cisco.com wrote:

 Dane put some notes on the session's etherpad about supporting multiple 
  prefixes. Seems like this is really something that everyone wants to support in 
  OpenStack.
 
 --Robert
 
 On 5/16/14, 2:23 PM, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:
 
 Precisely Anthony! We talked about this topic (Non-NAT Floating IPv6) 
 here, on the following thread:
 
 --
 [openstack-dev] [Neutron][IPv6] Idea: Floating IPv6 - Without any kind of 
 NAT:
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/026871.html
 --
 
 :-D
 
 About IPv6 Privacy Extensions, well, if it is too hard to implement, I think 
 that it can be postponed... And only the IPv6 self-generated by SLAAC and 
 previously calculated by Neutron itself (based on Instance's MAC address), 
 should be allowed to pass/work for now...
 
 -
  Thiago
 
 
 On 16 May 2014 12:12, Veiga, Anthony anthony_ve...@cable.comcast.com wrote:
 I'll take this one a step further.  I think one of the methods for getting 
 (non-NAT) floating IPs in IPv6 would be to push a new, extra address to the 
 same port.  Either by crafting an extra, unicast RA to the specific VM or 
 providing multiple IA_NA fields in the DHCPv6 transaction.  This would 
 require multiple addresses to be allowed on a single MAC.
 -Anthony
 
  From: Martinx - ジェームズ thiagocmarti...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, May 15, 2014 at 14:18 
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][IPv6] Privacy extension
 
 Hello!
 
  I agree that there is no need for Privacy Extensions in a cloud 
  environment, since the MAC addresses are fake... No big deal...
  
  Nevertheless, I think it would be nice to allow 1 instance to have more 
  than 1 IPv6 address, since IPv6 is (almost) virtually unlimited... This way, 
  a VM with, for example, a range of IPv6 addresses assigned to it, can host a 
  shared environment where each website has its own IPv6 address (I prefer to 
  use IP-based virtualhosts on Apache, instead of name-based)...
 
 Cheers!
 Thiago
 
 
 On 15 May 2014 14:22, Ian Wells ijw.ubu...@cack.org.uk wrote:
 I was just about to respond to that in the session when we ran out of 
 time.  I would vote for simply insisting that VMs run without the privacy 
 extension enabled, and only permitting the expected ipv6 address based on 
 MAC.  Its primary purpose is to conceal your MAC address so that your IP 
 address can't be used to track you, as I understand it, and I don't think 
 that's as relevant in a cloud environment and where the MAC addresses are 
 basically fake.  Someone interested in desktop virtualisation with 
 Openstack may wish to contradict me...
 -- 
 Ian.
 
 
 On 15 May 2014 09:30, Shixiong Shang sparkofwisdom.cl...@gmail.com 
 wrote:
 Hi, guys:
 
 Nice to meet with all of you in the technical session and design 
 session. I mentioned the challenge of privacy extension in the meeting, 
 but would like to hear your opinions of how to address the problem. If 
 you have any comments or suggestions, please let me know. I will create 
 a BP for this problem.
 
 Thanks!
 
 Shixiong
 
 
 Shixiong Shang
 
 !--- Stay Hungry, Stay Foolish ---!
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-05-20 Thread Randall Burt
Bartosz, would that be in addition to --nested? Seems like I'd want to be able 
to say all of it as well as some of it.

On May 20, 2014, at 1:24 PM, Bartosz Górski bartosz.gor...@ntti3.com
 wrote:

 Hi Tim,
 
  Maybe instead of just a boolean flag like --nested on resource-list, we 
  could add an optional argument like --depth X or --nested-level X (X being 
  an integer) to limit the depth when recursively listing nested resources?
 
 Best,
 Bartosz
 
 On 05/19/2014 09:13 PM, Tim Schnell wrote:
 Blueprint:
 https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
 
 Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list
 
 Tim
 
 On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:
 
 On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com wrote:
 
 
 On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
 wrote:
 
 On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
 Hi Nilakhya,
 
 As Randall mentioned we did discuss this exact issue at the summit. I
 was
 planning on putting a blueprint together today to continue the
 discussion.
 The Stack Preview call is already doing the necessary recursion to
 gather
 the resources so we discussed being able to pass a stack id to the
 preview
 endpoint to get all of the resources.
 
 However, after thinking about it some more, I agree with Randall that
 maybe this should be an extra query parameter passed to the
 resource-list
 call. I'll have the blueprint up later today, unless you have already
 started on it.
 Note there is a patch from Anderson/Richard which may help with this:
 
 https://review.openstack.org/#/c/85781/
 
 The idea was to enable easier introspection of resources backed by
 nested
 stacks in a UI, but it could be equally useful to generate a tree
 resource view in the CLI client by walking the links.
 
 This would obviously be less efficient than recursing inside the
 engine,
 but arguably the output would be much more useful if it retains the
 nesting
 structure, as opposed to presenting a fully flattened soup of
 resources
 with no idea which stack/layer they belong to.
 
 Steve
 Could we simply add stack name/id to this output if the flag is passed? I
 agree that we currently have the capability to traverse the tree
 structure of nested stacks, but several folks have requested this
 capability, mostly for UI/UX purposes. It would be faster if you want the
 flat structure and we still retain the capability to create your own
 tree/widget/whatever by following the links. Also, I think its best to
 include this in the API directly since not all users are integrating
 using the python-heatclient.
 +1 for adding the stack name/id to the output to maintain a reference to
 the initial stack that the resource belongs to. The original stated
 use-case that I am aware of was to have a flat list of all resources
 associated with a stack to be displayed in the UI when the user prompts to
 delete a stack. This would prevent confusion about what and why different
 resources are being deleted due to the stack delete.
 
 This use-case does not require any information about the nested stacks but
 I can foresee that information being useful in the future. I think a
 flattened data structure (with a reference to stack id) is still the most
 efficient solution. The patch landed by Anderson/Richard provides an
 alternate method to drill down into nested stacks if the hierarchy is
 important information though this is not the optimal solution in this
 case.
 
 Tim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Smarter timeouts in Tempest?

2014-05-20 Thread Matt Riedemann



On 5/19/2014 1:25 PM, Sean Dague wrote:

On 05/19/2014 02:13 PM, Matt Riedemann wrote:



On 5/19/2014 11:33 AM, Matt Riedemann wrote:



On 5/19/2014 10:53 AM, Matt Riedemann wrote:

I was looking through this timeout bug [1] this morning and am able to
correlate that around the time of the image snapshot timeout, ceilometer
was really hammering CPU on the host.  There are already threads on
ceilometer performance and how that needs to be improved for Tempest
runs so I don't want to get into that here.

What I'm thinking about is if there is a way to be smarter about how we
do timeouts in the tests, rather than just rely on globally configured
hard-coded timeouts which are bound to fail intermittently in dynamic
environments like this.

I'm thinking something along the lines of keeping track of CPU stats on
intervals in our waiter loops, then when we reach our configured
timeout, calculate the average CPU load/idle and if it falls below some
threshold, we cut the timeout in half and redo the timeout loop - and we
continue that until our timeout reaches some level that no longer makes
sense, like once it drops less than a minute for example.
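
A very rough sketch of that loop (all names are placeholders and the CPU
sampling mechanism is deliberately left abstract):

    import time

    def wait_for(check_done, timeout, get_cpu_idle,
                 idle_threshold=20.0, min_timeout=60):
        deadline = time.time() + timeout
        idle_samples = []
        while not check_done():
            idle_samples.append(get_cpu_idle())
            if time.time() >= deadline:
                avg_idle = sum(idle_samples) / len(idle_samples)
                timeout /= 2.0
                if avg_idle < idle_threshold and timeout >= min_timeout:
                    # host was pegged while we waited: grant a shorter retry
                    deadline = time.time() + timeout
                    idle_samples = []
                else:
                    return False
            time.sleep(1)
        return True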

Are there other ideas here?  My main concern is the number of random
timeout failures we see in the tests and then people are trying to
fingerprint them with elastic-recheck but the queries are so generic
they are not really useful.  We now put the test class and test case in
the compute test timeout messages, but it's also not very useful to
fingerprint every individual permutation of test class/case that we can
hit a timeout in.

[1] https://bugs.launchpad.net/nova/+bug/1320617



This change to devstack should help [1].

It would be good if we actually used the default timeouts we have
configured in Tempest rather than hard-coding them in devstack based on
the latest state of the gate at the time.

[1] https://review.openstack.org/#/c/94221/



I have a proof of concept up for Tempest with adjusted timeouts based on
CPU idle values here:

https://review.openstack.org/#/c/94245/


The problem is this makes an assumption that Tempest is on the host
where the services are. We actually need to get away from that assumption.

If there is something in ceilometer that would make sense to poll, that
might be an option. But psutils definitely can't be a thing we us here.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I've abandoned both changes in great shame. :P

To target the timeout failure in the image snapshot tests and bug 
1320617, I'm looking to see if maybe the experimental tasks API in 
glance v2 could help get some diagnostic information at the point of 
failure to see if things are just slow or if the snapshot is actually 
hung and will never complete.


Ideally we could leverage tasks across the services for debugging issues 
like this.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Nominate Trevor McKay for sahara-core

2014-05-20 Thread Sergey Lukjanov
Trevor,

you're added to all core-related teams now. If you need help, feel free
to contact me and the other team members.

Thanks.

On Tuesday, May 20, 2014, Telles Nobrega tellesnobr...@gmail.com wrote:

 +1


 On Mon, May 19, 2014 at 11:13 AM, Sergey Lukjanov 
  slukja...@mirantis.com wrote:

 Trevor, congrats!

 welcome to the sahara-core.

 On Thu, May 15, 2014 at 11:41 AM, Matthew Farrellee 
  m...@redhat.com wrote:
  On 05/12/2014 05:31 PM, Sergey Lukjanov wrote:
 
  Hey folks,
 
  I'd like to nominate Trevor McKay (tmckay) for sahara-core.
 
  He is among the top reviewers of Sahara subprojects. Trevor is working
  on Sahara full time since summer 2013 and is very familiar with
  current codebase. His code contributions and reviews have demonstrated
  a good knowledge of Sahara internals. Trevor has a valuable knowledge
  of EDP part and Hadoop itself. He's working on both bugs and new
  features implementation.
 
  Some links:
 
  http://stackalytics.com/report/contribution/sahara-group/30
  http://stackalytics.com/report/contribution/sahara-group/90
  http://stackalytics.com/report/contribution/sahara-group/180
 
 
 https://review.openstack.org/#/q/owner:tmckay+sahara+AND+-status:abandoned,n,z
  https://launchpad.net/~tmckay
 
  Sahara cores, please, reply with +1/0/-1 votes.
 
  Thanks.
 
 
  +1
 
 
  ___
  OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 --
 Telles Mota Vidal Nobrega
 Bsc in Computer Science at UFCG
 Software Engineer at PulsarOpenStack Project - HP/LSD-UFCG



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] [Docker] Resource

2014-05-20 Thread Andrew Plunk
Hello All,

The purpose of this email is to document a few discussions from the summit, and 
to facilitate communication between parties at Docker and the Heat community.

The way the Docker resource is currently implemented requires the remote Docker 
API to be enabled on the compute instances that Heat wants to create containers 
on. The way Docker suggests securing the remote API is by using TLS client 
certificates signed by a trusted CA that is also used to start up the Docker API 
(http://docs.docker.io/examples/https/). This presents a problem for Heat 
because certificates would have to be added to Heat for each Docker resource 
(or per stack) in order to have secure communication, which creates a 
scalability problem and requires Heat to store customer secrets.

The solution I propose to this problem is to integrate Docker with software 
config, which would allow the Docker API running on a compute instance to 
listen on a unix socket while still being able to communicate with the Heat 
engine. I have created a blueprint to capture this proposal:

https://blueprints.launchpad.net/heat/+spec/software-config-docker

Any input on this proposal is welcome.

Thanks everyone!
-Andrew Plunk
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] failing postgres jobs

2014-05-20 Thread Sergey Lukjanov
As I see, the 94315 merged atm, is the issue fixed?

On Tuesday, May 20, 2014, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 If you hit an unknown error in a postgres job since Tue May 20 00:30:48
  2014 + you probably hit https://bugs.launchpad.net/trove/+bug/1321093
  (*-tempest-dsvm-postgres-full failing on trove-manage db_sync)

 A fix is in the works: https://review.openstack.org/#/c/94315/

 so once the fix lands, just run 'recheck bug 1321093'

 Additional patches are up to prevent this from happening again as well
 [0][1].

 best,
 Joe

 [0] https://review.openstack.org/#/c/94307/
 [1] https://review.openstack.org/#/c/94314/



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-20 Thread Zane Bitter

On 20/05/14 12:17, Jay Pipes wrote:

Hi Zane, sorry for the delayed response. Comments inline.

On 05/06/2014 09:09 PM, Zane Bitter wrote:

On 05/05/14 13:40, Solly Ross wrote:

One thing that I was discussing with @jaypipes and @dansmith over
on IRC was the possibility of breaking flavors down into separate
components -- i.e have a disk flavor, a CPU flavor, and a RAM flavor.
This way, you still get the control of the size of your building blocks
(e.g. you could restrict RAM to only 2GB, 4GB, or 16GB), but you avoid
exponential flavor explosion by separating out the axes.


Dimitry and I have discussed this on IRC already (no-one changed their
mind about anything as a result), but I just wanted to note here that I
think even this idea is crazy.

VMs are not allocated out of a vast global pool of resources. They're
allocated on actual machines that have physical hardware costing real
money in fixed ratios.

Here's a (very contrived) example. Say your standard compute node can
support 16 VCPUs and 64GB of RAM. You can sell a bunch of flavours:
maybe 1 VCPU + 4GB, 2 VCPU + 8GB, 4 VCPU + 16GB... c. But if (as an
extreme example) you sell a server with 1 VCPU and 64GB of RAM you have
a big problem: 15 VCPUs that nobody has paid for and you can't sell.
(Disks add a new dimension of wrongness to the problem.)


You are assuming a public cloud provider use case above. As much as I
tend to focus on the utility cloud model, where the incentives are
around maximizing the usage of physical hardware by packing in as many
paying tenants into a fixed resource, this is only one domain for
OpenStack.


I was assuming the use case advanced in this thread, which sounded like 
a semi-public cloud model.


However, I'm actually trying to argue from a higher level of abstraction 
here. In any situation where there are limited resources, optimal 
allocation of those resources will occur when the incentives of the 
suppliers and consumers of said resources are aligned, independently of 
whose definition of optimal you use. This applies equally to public 
clouds, private clouds, lemonade stands, and the proverbial two guys 
stranded on a desert island. In other words, it's an immutable property 
of economies, not anything specific to one use case.



There are, for good or bad, IT shops and telcos that frankly are willing
to dump money into an inordinate amount of hardware -- and see that
hardware be inefficiently used -- in order to appease the demands of
their application customer tenants. The impulse of onboarding teams for
these private cloud systems is to just say yes, with utter disregard
to the overall cost efficiency of the proposed customer use cases.


Fine, but what I'm saying is that you can just give the customer _more_ 
than they really wanted (i.e. round up to the nearest flavour). You can 
charge them the same if you want - you can even decouple pricing from 
the flavour altogether if you want. But what you can't do is assume 
that, just because you gave the customer exactly what they needed and 
not one kilobyte more, you still get to use/sell the excess capacity you 
didn't allocate to them. Because you may not.



If there was a simple switching mechanism that allowed a deployer to
turn on or off this ability to allow tenants to construct specialized
instance type configurations, then who really loses here? Public or
utility cloud providers would simply leave the switch to its default of
off and folks who wanted to provide this functionality to their users
could provide it. Of course, there are clear caveats around lack of
portability to other clouds -- but let's face it, cross-cloud
portability has other challenges beyond this particular point ;)


The insight of flavours, which is fundamental to the whole concept of
IaaS, is that users must pay the *opportunity cost* of their resource
usage. If you allow users to opt, at their own convenience, to pay only
the actual cost of the resources they use regardless of the opportunity
cost to you, then your incentives are no longer aligned with your
customers.


Again, the above assumes a utility cloud model. Sadly, that isn't the
only cloud model.


The only assumption is that resources are not (effectively) unlimited.


You'll initially be very popular with the kind of customers
who are taking advantage of you, but you'll have to hike prices across
the board to make up the cost leading to a sort of dead-sea effect. A
Gresham's Law of the cloud, if you will, where bad customers drive out
good customers.

Simply put, a cloud allowing users to define their own flavours *loses*
to one with predefined flavours 10 times out of 10.

In the above example, you just tell the customer: bad luck, you want
64GB of RAM, you buy 16 VCPUs whether you want them or not. It can't
actually hurt to get _more_ than you wanted, even though you'd rather
not pay for it (provided, of course, that everyone else *is* paying for
it, and cross-subsidising you... which they won't).

Now, it's 

Re: [openstack-dev] [infra] Meeting Tuesday May 20th at 19:00 UTC

2014-05-20 Thread Elizabeth K. Joseph
On Mon, May 19, 2014 at 9:40 AM, Elizabeth K. Joseph
l...@princessleia.com wrote:
 Hi everyone,

 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting on Tuesday May 20th, at 19:00 UTC in #openstack-meeting

Great post-summit meeting, thanks to everyone who joined us.

Minutes and logs here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-05-20-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-05-20-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-05-20-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] failing postgres jobs

2014-05-20 Thread Nikhil Manchanda
Yes, this issue is fixed now that 94315 is merged.


On Tue, May 20, 2014 at 3:38 PM, Sergey Lukjanov slukja...@mirantis.com wrote:

 As I see, the 94315 merged atm, is the issue fixed?


 On Tuesday, May 20, 2014, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 If you hit an unknown error in a postgres job since Tue May 20 00:30:48
  2014 + you probably hit https://bugs.launchpad.net/trove/+bug/1321093
  (*-tempest-dsvm-postgres-full failing on trove-manage db_sync)

 A fix is in the works: https://review.openstack.org/#/c/94315/

 so once the fix lands, just run 'recheck bug 1321093'

 Additional patches are up to prevent this from happening again as well
 [0][1].

 best,
 Joe

 [0] https://review.openstack.org/#/c/94307/
 [1] https://review.openstack.org/#/c/94314/



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-20 Thread Stephen Wong
Hi,

I am part of the ServiceVM team and I will attend the NFV IRC meetings.

Thanks,
- Stephen


On Tue, May 20, 2014 at 8:59 AM, Chris Wright chr...@sous-sol.org wrote:

 * balaj...@freescale.com (balaj...@freescale.com) wrote:
   -Original Message-
   From: Kyle Mestery [mailto:mest...@noironetworks.com]
   Sent: Tuesday, May 20, 2014 12:19 AM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit
  
   On Mon, May 19, 2014 at 1:44 PM, Ian Wells ijw.ubu...@cack.org.uk
   wrote:
I think the Service VM discussion resolved itself in a way that
reduces the problem to a form of NFV - there are standing issues
 using
VMs for services, orchestration is probably not a responsibility that
lies in Neutron, and as such the importance is in identifying the
problems with the plumbing features of Neutron that cause
implementation difficulties.  The end result will be that VMs
implementing tenant services and implementing NFV should be much the
same, with the addition of offering a multitenant interface to
   Openstack users on the tenant service VM case.
   
Geoff Arnold is dealing with the collating of information from people
that have made the attempt to implement service VMs.  The problem
areas should fall out of his effort.  I also suspect that the key
points of NFV that cause problems (for instance, dealing with VLANs
and trunking) will actually appear quite high up the service VM list
 as
   well.
--
   There is a weekly meeting for the Service VM project [1], I hope some
    representatives from the NFV sub-project can make it to this meeting
 and
   participate there.
   [P Balaji-B37839] I agree with Kyle, so that we will have enough sync
  between the Service VM and NFV goals.

 Makes good sense.  Will make sure to get someone there.

 thanks,
 -chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Juno Roadmap

2014-05-20 Thread Kurt Griffiths
Hi folks, I took the major work items we discussed at the summit and placed 
them into the three Juno milestones:

https://wiki.openstack.org/wiki/Roadmap_(Marconi)

Let me know what you think over the next few days. We will address any 
remaining questions and concerns at our next team meeting (next Tuesday at 1500 
UTC in #openstack-meeting-alt).

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Meeting Minutes - 2014-05-20

2014-05-20 Thread Brian Curtin
Next meeting will be 2014-05-27

Minutes: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-05-20-19.00.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-05-20-19.00.txt

Log: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-05-20-19.00.log.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon][UX] Improving the User Experience of Messaging in Horizon

2014-05-20 Thread Liz Blanchard
Hi All,

I put together a page on the wiki [1] capturing a first draft of some ideas on 
how to improve the User Experience of the messaging in Horizon. These are not 
technical and really just focus on the presentation layer of these messages, 
but it would be great to see this be expanded or additional proposals be 
created around some of the technical discussions that need to take place for 
improving messaging. 

Please feel free to add to/edit this page as you think makes sense. Also, any 
feedback is much appreciated.

Thanks,
Liz

[1] 
https://wiki.openstack.org/wiki/UX/Improve_User_Experience_of_Messaging_in_Horizon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Docker] Resource

2014-05-20 Thread Eric Windisch


  The solution I propose to this problem is to integrate Docker with
 software config, which would allow the Docker API running on a compute
 instance to listen on a unix socket


First, thank you for looking at this.

Docker already listens on a unix socket. I'm not as familiar with Heat's
'software config' as I should be, although I attended a couple sessions on
it last week. I'm not sure how this solves the problem? Is the plan to have
the software-config-agent communicate over the network to/from Heat, and to
the instance's local unix socket?

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ReviewStat] WebUI for review stat

2014-05-20 Thread Nachi Ueno
Hi folks

I could deploy it on openshift
http://reviewstat-nachi.rhcloud.com/

2014-05-19 20:36 GMT-07:00 Nachi Ueno na...@ntti3.com:
 Hi Boris

 Ya, I know this stats.
 The primary usecase for this patch is helping reviewer assignment for
 each patch.

 2014-05-19 18:50 GMT-07:00 Boris Pavlovic bo...@pavlovic.me:
 Hi Nachi,


 Sorry for question, but did you try stackalytics?

 E.g. this page http://stackalytics.com/report/contribution/neutron/30

 Best regards,
 Boris Pavlovic




 On Tue, May 20, 2014 at 5:20 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 As per the neutron discussion, we agreed it is really important to
 distribute core-reviewer load,
 so we got the idea to specify primary/secondary reviewers for each review.

 IMO, this is almost impossible without some helper tool.

 so, I wrote it.

 https://www.youtube.com/watch?v=ouS5h5Z-W50feature=youtu.be

 This is a review for review stats.
 https://review.openstack.org/#/c/94288/
 # Note, this code is my side work,, so hopefully we can get lower bar
 for review :)

 sudo python setup.py develop
 ./run_server.sh
 open http://127.0.0.1:8080/

 This webui identifies primary/secondary reviewers from comment.
 so if you put the comment like this,

 primary: mestery, secondary:maru

 Kyle, and Maru will be assigned as primary/secondary reviewer.
 This is regexp primary:\s*(.+),\s*secondary:\s*(.+)
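 
 For example (just exercising the regexp above):
 
     import re
 
     m = re.search(r'primary:\s*(.+),\s*secondary:\s*(.+)',
                   'primary: mestery, secondary:maru')
     print(m.groups())  # ('mestery', 'maru')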

 Otherwise, this webui selects primary/secondary from existing
 core-reviewer who are reviewing the change set.

 Some my thoughts for next steps.

 (1) core-reviewer-scheduler
 Currently, 208 patch needs core in Neutron related projects.
 I feel manual assignment by discussion won't work at this scale...

 I think we need core-reviewer-scheduler such as nova-scheduler too.
 # I'm not sure cores like this idea, though

 (2) launchpad integration
 Yep, it is great if we can also show priorities by launchpad data in here.

 (3) Hosting this code in somewhere, (hopefully, in infra)
 (4) Performance impact for review.openstack.org
 This code is using heavy REST API calls (it gets all reviews and comments
 for all patches in a project). I hope this code won't kill
 review.openstack.org

 Best
 Nachi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Docker] Resource

2014-05-20 Thread Clint Byrum
Excerpts from Andrew Plunk's message of 2014-05-20 13:49:58 -0700:
 No Problem.
 
 As the Docker resource in Heat currently works, it will require Docker 
  running on a customer's VM to listen over a network socket. With software 
  config you could allow Docker to listen on the instance's local unix socket, 
  and communicate with Docker via Heat's in-instance software config agents.
 

This would effectively make Docker another case of syntactic sugar,
much like OS::SoftwareConfig::Chef would be.

Short term I think this will get Docker onto Heat users' instances
quicker.

However I do think that the limitations of this approach are pretty
large, such as how to model the network in such a way where we could
attach a floating IP to a container running in a VM. There's a lot of
extra plumbing that gets shrugged off to user-managed hosts that seems
like a generic nesting interface that would be useful for things like
TripleO too where we want to nest a VM inside the deployment cloud.

Anyway, the local socket is the way to go. The less we teach Heat's
engine to reach out to non-OpenStack API's, the more we can keep it
isolated from hostile entities.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Mistral roadmap notes, from Atlanta summit (rough)

2014-05-20 Thread Dmitri Zimine
We shared and discussed the directions at the Mistral development session,
and worked through the list of smaller post-POC steps, placing them on the 
roadmap.

The notes are here: https://etherpad.openstack.org/p/juno-summit-mistral.
Renat and I developed a shared understanding on most of them. 

Next steps: create the blueprints, prioritize, implement. 

DZ.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core API refactoring

2014-05-20 Thread Mandeep Dhami
Renewing the thread, is there a blueprint for this refactoring effort?

In the email thread till now, we have just had an etherpad link. I would
like to get more deeply involved in design/implementation and review of
these changes and I get a feeling that not being able to attend the Atlanta
summit is going to be a significant barrier to participation in this
critical effort.

Regards,
Mandeep



On Thu, May 15, 2014 at 10:48 AM, Mandeep Dhami dh...@noironetworks.com wrote:


 Thanks for the link, Fawad. I had actually seen the etherpad, but I was
 hoping that there was a design document backing it up.

 Regards,
 Mandeep


 On Thu, May 15, 2014 at 10:15 AM, Fawad Khaliq fa...@plumgrid.com wrote:

 Hi Mandeep,

 You can find discussion/details in the etherpad link[1].

 [1] https://etherpad.openstack.org/p/refactoring-the-neutron-core

 Thanks,

 Fawad Khaliq
 (m) +1 408.966.2214


 On Thu, May 15, 2014 at 9:40 AM, Mandeep Dhami 
  dh...@noironetworks.com wrote:

 Hi:

 I am not at the conference this week, but it is my understanding that
 there was a proposal for neutron core API refactoring discussed yesterday.
 I am trying to catch up with that discussion, is there a formal design
 description or blueprint that I can review?

 Thanks,
 Mandeep
 -


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Docker] Resource

2014-05-20 Thread Andrew Plunk
This would effectively make Docker another case of syntactic sugar,
much like OS::SoftwareConfig::Chef would be.

Short term I think this will get Docker onto Heat users' instances
quicker.

Agreed.

However I do think that the limitations of this approach are pretty
large, such as how to model the network in such a way where we could
attach a floating IP to a container running in a VM. There's a lot of
extra plumbing that gets shrugged off to user-managed hosts that seems
like a generic nesting interface that would be useful for things like
TripleO too where we want to nest a VM inside the deployment cloud.


I do not think limiting the Docker API to listening on a unix socket
on a compute instance would stop one from being able to attach a
floating IP to a container running in a VM. One would have to start
the Docker container mapped to a port on the host VM that was not
firewalled. Docker already has support for mapping host to container
ports, so the software config agent would just have to pass those options
along.
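
For what it's worth, talking to Docker over the local unix socket from an
in-instance agent is straightforward with docker-py. A minimal sketch (the
image name and port numbers are just examples, not anything our resource
mandates):

    import docker

    # Connect over the local unix socket rather than a TCP port, so the
    # Docker API is never exposed on the network.
    client = docker.Client(base_url='unix://var/run/docker.sock')

    # Create a container with container port 80 published on host port
    # 8080, so a floating IP pointed at the host VM can reach the service.
    container = client.create_container(image='nginx', ports=[80])
    client.start(container, port_bindings={80: 8080})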

-Andrew

On 5/20/14 4:16 PM, andrew plunk andrewdev...@gmail.com wrote:
Excerpts from Andrew Plunk's message of 2014-05-20 13:49:58 -0700:
 No Problem.

 As the Docker resource in Heat currently works, it will require Docker
running on a customer's vm to listen over a network socket. With
software config you could allow Docker to listen on the instance's local
unix socket, and communicate with Docker via Heat's in instance software
config agents.


This would effectively make Docker another case of syntactic sugar,
much like OS::SoftwareConfig::Chef would be.

Short term I think this will get Docker onto Heat users' instances
quicker.

However I do think that the limitations of this approach are pretty
large, such as how to model the network in such a way where we could
attach a floating IP to a container running in a VM. There's a lot of
extra plumbing that gets shrugged off to user-managed hosts that seems
like a generic nesting interface that would be useful for things like
TripleO too where we want to nest a VM inside the deployment cloud.

Anyway, the local socket is the way to go. The less we teach Heat's
engine to reach out to non-OpenStack API's, the more we can keep it
isolated from hostile entities.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-20 Thread Dimitri Mazmanov
Hi!
Comments inline.

On 20/05/14 21:58, Zane Bitter zbit...@redhat.com wrote:

On 20/05/14 12:17, Jay Pipes wrote:
 Hi Zane, sorry for the delayed response. Comments inline.

 You are assuming a public cloud provider use case above. As much as I
 tend to focus on the utility cloud model, where the incentives are
 around maximizing the usage of physical hardware by packing in as many
 paying tenants into a fixed resource, this is only one domain for
 OpenStack.

I was assuming the use case advanced in this thread, which sounded like
a semi-public cloud model.

However, I'm actually trying to argue from a higher level of abstraction
here. In any situation where there are limited resources, optimal
allocation of those resources will occur when the incentives of the
suppliers and consumers of said resources are aligned, independently of
whose definition of optimal you use. This applies equally to public
clouds, private clouds, lemonade stands, and the proverbial two guys
stranded on a desert island. In other words, it's an immutable property
of economies, not anything specific to one use case.

This makes perfect sense. I'd add one tiny bit though: "...optimal allocation
of those resources will *eventually* occur...".
For clouds, by rounding up to the nearest flavour you actually leave no
space for optimisation. Even for the lemonade stands you'd first observe
what people prefer most before deciding on the optimal allocation of water or
soda bottles :)


 There are, for good or bad, IT shops and telcos that frankly are willing
 to dump money into an inordinate amount of hardware -- and see that
 hardware be inefficiently used -- in order to appease the demands of
 their application customer tenants. The impulse of onboarding teams for
 these private cloud systems is to just say yes, with utter disregard
 to the overall cost efficiency of the proposed customer use cases.

+1. I'd also add support of legacy applications as another reason for the
"utter disregard".


Fine, but what I'm saying is that you can just give the customer _more_
than they really wanted (i.e. round up to the nearest flavour). You can
charge them the same if you want - you can even decouple pricing from
the flavour altogether if you want. But what you can't do is assume
that, just because you gave the customer exactly what they needed and
not one kilobyte more, you still get to use/sell the excess capacity you
didn't allocate to them. Because you may not.

Like I said above, if you round up you most definitely don't get to use
the excess capacity.
Also, where exactly would you place this rounding up functionality? Heat?
Nova? A custom script that runs before deployment? Assume the tenant
doesn't know what flavours are available, because template creation is
done automatically outside of the cloud environment.


 If there was a simple switching mechanism that allowed a deployer to
 turn on or off this ability to allow tenants to construct specialized
 instance type configurations, then who really loses here? Public or
 utility cloud providers would simply leave the switch to its default of
 off and folks who wanted to provide this functionality to their users
 could provide it. Of course, there are clear caveats around lack of
 portability to other clouds -- but let's face it, cross-cloud
 portability has other challenges beyond this particular point ;)

 The insight of flavours, which is fundamental to the whole concept of
 IaaS, is that users must pay the *opportunity cost* of their resource
 usage. If you allow users to opt, at their own convenience, to pay only
 the actual cost of the resources they use regardless of the opportunity
 cost to you, then your incentives are no longer aligned with your
 customers.

 Again, the above assumes a utility cloud model. Sadly, that isn't the
 only cloud model.

__
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-
Dimitri


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] - Integration with neutron using external attachment point

2014-05-20 Thread Russell Haering
We've been experimenting some with how to use Neutron with Ironic here at
Rackspace.

Our very experimental code:
https://github.com/rackerlabs/ironic-neutron-plugin

Our objective is the same as what you're describing, to allow Nova servers
backed by Ironic to attach to arbitrary Neutron networks. We're initially
targeting VLAN-based networks only, but eventually want to do VXLAN from
the top-of-rack switches, controlled via an SDN controller.

Our approach is a little different than what you're describing though. Our
objective is to modify the existing Nova - Neutron interaction as little
as possible, which means approaching the problem by thinking how would an
L2 agent do this?.

The workflow looks something like this (a rough sketch of step 6 follows the list):

1. Nova calls Neutron to create a virtual port. Because this happens
_before_ Nova touches the virt driver, the port is at this point identical
to one created for a virtual server.
2. Nova executes the spawn method of the Ironic virt driver, which makes
some calls to Ironic.
3. Inside Ironic, we know about the physical switch ports that the selected
Node is connected to. This information is discovered early-on using LLDP
and stored in the Ironic database.
4. We actually need the node to remain on an internal provisioning VLAN for
most of the provisioning process, but once we're done with on-host work we
turn the server off.
5. Ironic deletes a Neutron port that was created at bootstrap time to
trunk the physical switch ports for provisioning.
6. Ironic updates each of the customer's Neutron ports with information
about its physical switch port.
7. Our Neutron extension configures the switches accordingly.
8. Then Ironic brings the server back up.
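
To make step 6 concrete, here's a rough client-side sketch. The switch
attribute names below are invented for illustration; the real extension
attributes live in the experimental plugin linked above:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')

    port_id = 'PORT-UUID'  # the customer port created in step 1

    # Step 6: annotate the port with the physical switch port that
    # Ironic discovered via LLDP.
    neutron.update_port(port_id, {
        'port': {
            'switch:hardware_id': 'rack1-tor-a',  # invented attribute names
            'switch:port': 'Ethernet1/5',
        }
    })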

The destroy process basically does the reverse. Ironic removes the physical
switch mapping from the Neutron ports, re-creates an internal trunked port,
does some work to tear down the server, then passes control back to Nova.
At that point Nova can do what it wants with the Neutron ports.
Hypothetically that could include allocating them to a different Ironic
Node, etc, although in practice it just deletes them.

Again, this is all very experimental in nature, but it seems to work fairly
well for the use-cases we've considered. We'd love to find a way to
collaborate with others working on similar problems.

Thanks,
Russell


On Tue, May 20, 2014 at 7:17 AM, Akihiro Motoki amot...@gmail.com wrote:

 # Added [Neutron] tag as well.

 Hi Igor,

 Thanks for the comment. We already know them as I commented
 in the Summit session and ML2 weekly meeting.
 Kevin's blueprint now covers Ironic integration and layer2 network gateway
 and I believe campus-network blueprint will be covered.

 We think the work can be split into generic API definition and
 implementations
 (including ML2). In external attachment point blueprint review, API
 and generic topics are mainly discussed so far and the detail
 implementation is not discussed
 so much yet. ML2 implementation detail can be discussed later
 (separately or as a part of the blueprint review).

 I am not sure what changes are proposed in Blueprint [1].
 AFAIK an SDN/OpenFlow controller based approach can support this,
 but how can we achieve this with the existing open source implementations?
 I am also interested in the ML2 implementation detail.

 Anyway more input will be appreciated.

 Thanks,
 Akihiro

 On Tue, May 20, 2014 at 7:13 PM, Igor Cardoso igordc...@gmail.com wrote:
  Hello Kevin.
  There is a similar Neutron blueprint [1], originally meant for Havana but
  now aiming for Juno.
  I would be happy to join efforts with you regarding our blueprints.
  See also: [2].
 
  [1] https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
  [2] https://blueprints.launchpad.net/neutron/+spec/campus-network
 
 
  On 19 May 2014 23:52, Kevin Benton blak...@gmail.com wrote:
 
  Hello,
 
  I am working on an extension for neutron to allow external attachment
  point information to be stored and used by backend plugins/drivers to
 place
  switch ports into neutron networks[1].
 
  One of the primary use cases is to integrate ironic with neutron. The
  basic workflow is that ironic will create the external attachment points
  when servers are initially installed. This step could either be
 automated
  (extract switch-ID and port number of LLDP message) or it could be
 manually
  performed by an admin who notes the ports a server is plugged into.
 
  Then when an instance is chosen for assignment and the neutron port
 needs
  to be created, the creation request would reference the corresponding
  attachment ID and neutron would configure the physical switch port to
 place
  the port on the appropriate neutron network.
 
  If this workflow won't work for Ironic, please respond to this email or
  leave comments on the blueprint review.
 
  1. https://review.openstack.org/#/c/87825/
 
 
  Thanks
  --
  Kevin Benton
 
  ___
  OpenStack-dev mailing list
  

Re: [openstack-dev] [TripleO] Use of environment variables in tripleo-incubator

2014-05-20 Thread James Polley
I spoke to JP offline and confirmed that the link to 85418 should have been
a link to https://review.openstack.org/#/c/88252

I think that
https://etherpad.openstack.org/p/tripleo-incubator-rationalise-ui and
https://etherpad.openstack.org/p/tripleo-devtest.sh-refactoring-blueprint are
the closest things to documentation we've got about this. Now that we
have the specs repo, perhaps we should be creating a spec and moving the
discussion there.




On Tue, May 20, 2014 at 12:06 PM, Sullivan, Jon Paul 
jonpaul.sulli...@hp.com wrote:

  Hi,



 There are a number of reviews[1][2] where new environment variables are
 being disliked, leading to -1 or -2 code reviews because new environment
 variables are added.  It is looking like this is becoming a policy.



 If this is a policy, then could that be stated, and an alternate mechanism
 made available so that any reviews adding environment variables can use the
 replacement mechanism, please?



 Otherwise, some guidelines for developers where environment variables are
 acceptable or not would equally be useful.



 [1] https://review.openstack.org/85009

 [2] https://review.openstack.org/85418



 Thanks,

 jonpaul.sulli...@hp.com - Cloud Services - @hpcloud
 +353 (91) 75 4169



 Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park,
 Galway.

 Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John
 Rogerson's Quay, Dublin 2.

 Registered Number: 361933



 The contents of this message and any attachments to it are confidential
 and may be legally privileged. If you have received this message in error
 you should delete it from your system immediately and advise the sender.



 To any recipient of this message within HP, unless otherwise stated, you
 should consider this message and attachments as HP CONFIDENTIAL.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] devstack w/ neutron in vagrant - floating IPs

2014-05-20 Thread Paul Czarkowski
Has anyone had any success with running devstack and neutron in a vagrant 
machine where the floating IPs are accessible from outside of the vagrant box 
(i.e. from the host)?

I’ve spent a few hours trying to get it working without any real success.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] summary of scheduler sessions at the Juno design summit

2014-05-20 Thread Dugger, Donald D
Here is a brief rundown on the majority of the scheduler sessions from the 
summit, links to the etherpads and some of my incoherent notes from the 
sessions.  Feel free to reply to this email to correct any mistakes I made and 
to add any other thoughts you might have:


1)  Future of gantt interfaces & APIs (Sylvain Bauza)
https://etherpad.openstack.org/p/juno-nova-gantt-apis
As from the last summit everyone agrees that yes a separate scheduler project 
is desirable but we need to clean up the interfaces between Nova and the 
scheduler first.  There are 3 main areas that need to be cleaned up first 
(proxying for booting instances, a library to isolate the scheduler and isolate 
access to DB objects).  We have BPs created for all of these areas so we need 
to implement those BPs first, all of that work happening in the current Nova 
tree.  After those 3 steps are done we need to check for any other external 
dependencies (hopefully there aren't any) and then we can split the code out 
into the gantt repository.


2)  Common no DB scheduler (Boris)
https://etherpad.openstack.org/p/juno-nova-no-db-scheduler
Pretty much agreement that the new no-db scheduler needs to be 
switchable/configurable so that it can be selected at run time, don't want to 
do a flash cut that requires everyone to suddenly switch to the new 
architecture, it should be possible to design this such that the node state 
info, currently kept in the DB, can be handled by a back end that can either 
use the current DB methods or the new no-db methods.

Much discussion over the fact that the current patches use memcached to 
contain a journal of all update messages about node state changes, which means 
that the scheduler will just be re-inventing journaling problems/solutions that 
are well handled by current DBs.  Another idea would be to use the memcached 
area to hold complete state info for each node, using a counter mechanism to 
know when the data is out of date.  Need to evaluate the pros/cons of different 
memcached designs.
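
To make the second idea concrete, here is a rough sketch of the
full-state-plus-counter approach with python-memcached (key names invented
for illustration):

    import json
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def publish_node_state(node_id, state):
        # Bump a per-node counter so readers can tell when their copy
        # of the state is stale.
        mc.add('node-seq-%s' % node_id, 0)  # no-op if the key exists
        seq = mc.incr('node-seq-%s' % node_id)
        mc.set('node-state-%s' % node_id,
               json.dumps({'seq': seq, 'state': state}))

    def read_node_state(node_id):
        raw = mc.get('node-state-%s' % node_id)
        if raw is None:
            return None
        data = json.loads(raw)
        # If another writer bumped the counter since this state was
        # written, treat the data as out of date.
        if data['seq'] != mc.get('node-seq-%s' % node_id):
            return None
        return data['state']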


3)  Simultaneous scheduling for server groups (Mike Spreitzer)
https://etherpad.openstack.org/p/juno-nova-scheduling-server-groups
The basic idea is a desire to schedule a group of hosts in one call (more 
consistent with server groups) rather than multiple scheduler calls for one 
node at a time.  Talking about this the real issue seem to be a resource 
reservation problem, the user wants to reserve a set of nodes and then, given 
the reservation succeeds, do the actual scheduling task.  As such, this sounds 
like something that maybe should be integrated in with the climate and/or heat. 
 Need to do some more research to see if this problem can be addressed and/or 
helped by either of those projects.


4)  Scheduler hints for VM lifecycle (Jay Lau)
https://etherpad.openstack.org/p/juno-nova-scheduler-hints-vm-lifecycle
The basic problem is that scheduler hints are only available at instance 
instantiation time; the info is then lost and not available for migration 
decisions, so we need to store the hints somehow.  We could create a new table 
to hold the hints, we could add a new (arbitrary blob) field to the instances 
table, or we could store the info in the system metadata, which means we might 
need to resizulate the thingotron (that was the actual comment, interpretation 
is left to the reader :)  No clear consensus on what to do, more research needed.
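
For illustration, the system-metadata option could look roughly like this
(a sketch only; the 'scheduler_hints' key is invented here, not an agreed
design):

    import json

    def store_scheduler_hints(instance, hints):
        # Stash the original hints at boot time so a later migration
        # decision can reuse them.
        instance.system_metadata['scheduler_hints'] = json.dumps(hints)
        instance.save()

    def get_scheduler_hints(instance):
        raw = instance.system_metadata.get('scheduler_hints')
        return json.loads(raw) if raw else {}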



--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-20 Thread Chris Wright
* Stephen Wong (s3w...@midokura.com) wrote:
 I am part of the ServiceVM team and I will attend the NFV IRC meetings.

Great, thank you Stephen.

cheers,
-chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova] Status of v3 tests in tempest

2014-05-20 Thread GHANSHYAM MANN
I also agree that instead of removing the old tests, we should keep changing
them as microversions change.
One suggestion (maybe the same as what Chris is thinking):
- Tempest can keep a common test directory containing tests which stay the
same across microversion bumps. Initially all tests can go to the common
directory, and we keep filtering out the variant tests as microversions
progress.
- As a microversion changes, Tempest will override the tests for those APIs
which are being changed, and will also run the other common tests for that
version.
(An example diagram was attached as an inline image.)



-- 
Thanks
Ghanshyam Mann

On Mon, May 19, 2014 at 9:39 PM, Christopher Yeoh cbky...@gmail.com wrote:

 On Mon, May 19, 2014 at 9:12 PM, David Kranz dkr...@redhat.com wrote:

  On 05/19/2014 01:24 PM, Frittoli, Andrea (HP Cloud) wrote:

 Thanks for bringing this up.

 We won't be testing v3 in Juno, but we'll need coverage for v2.1.

 In my understanding it will be a v2 compatible API - so including proxies to
 glance, cinder and neutron - but with micro-versions to bring in v3 features
 such as CamelCase and Tasks.
 So we should be able to reuse a good chunk of the v3 test code for testing
 v2.1. Adding some config options for the v2.1 to v3 differences we could try
 and use the same tests for icehouse v3 and juno v2.1.

  While it is true that we may reuse some of the actual test code
 currently in v3, the overall code structure for micro-versions will be
 much different than for a parallel v2/v3. I wanted to make sure every
 one  on the qa list knows that v3 is being scrapped and that we should stop
 making changes that are intended only to enhance the maintainability of an
 active v2/v3 scenario.



 So I think we need to distinguish between v3 being scrapped and v3
 features being scrapped. I think its likely that most of the v3
 cleanups/features will end up being exposed via client microversions (it's
 what I sort of asked about near the end of the session). And by removing
 the tests we will inevitably end up with regressions which we don't want to
 happen.

 I think its pretty important we sort out the microversion design on the
 Nova side pretty quickly and we could adapt the existing v3 tempest tests
 to instead respond with a very high version microversion number. As we roll
 out new features or accept v3 changes in Nova with microversions,
 individual tests can then be changed to respond to the lower microversion
 numbers. That way we keep existing regression tests so we don't regress on
 the Nova side and don't need to rewrite them at a later date for tempest.
 Depending on how the client microversion design works this might make code
 duplication issues on the tempest side easier to handle - though we're
 going to need a pretty generic solution to support API testing of
 potentially quite a few versions of individual APIs depending on the
 microversion.  Every time we bump the microversion we essentially just add
 a new version to be tested, we don't replace the old one.

 There is one big implication for tempest regarding micoversions for Nova -
 scenario testing. With microversions we need to support testing for quite a
 few versions of slightly different APIs rather than just say 2. And there's
 some potential for quite a few different combinations especially if other
 projects go the microversion route as well.



 With regard to icehouse, my understanding is that we are basically
 deprecating v3 as an api before it was ever declared stable. Should we
 continue to carry technical debt in tempest to support testing the unstable
 v3 in icehouse? Another alternative, if we really want to continue testing
 v3 on icehouse but want to remove v3 from tempest, would be to create a
 stable/icehouse branch in tempest and run that against changes to
 stable/icehouse in projects in addition to running tempest master.

  -David

  We may have to implement support for micro-versions in tempests own rest
 client as well.

 andrea


 -Original Message-
 From: David Kranz [mailto:dkr...@redhat.com]
 Sent: 19 May 2014 10:49
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [qa][nova] Status of v3 tests in tempest

 It seems the nova team decided in Atlanta that v3 as currently understood
 is never going to exist: https://etherpad.openstack.org/p/juno-nova-v3-api.

 There are a number of patches in flight that tweak how we handle supporting
 both v2/v3 in tempest to reduce duplication.
 We need to decide what to do about this. At a minimum, I think we should
 stop any work that is inspired by any v3-related activity except to revert
 any v2/v3 integration that was already done. We should really rip out the v3
 stuff that was recently added. I know Matt had some concern about that
 regarding testing v3 in stable/icehouse but perhaps he can say more.

   -David

 ___
 OpenStack-dev mailing 
 

Re: [openstack-dev] [nova] 3rd party CI requirements for DB2

2014-05-20 Thread Michael Still
On Mon, May 19, 2014 at 10:35 PM, Joe Gordon joe.gord...@gmail.com wrote:
 On Wed, May 14, 2014 at 6:58 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
 wrote:

 I'd like to get some more discussion going for the nova-spec on adding DB2
 support [1] especially since we didn't get to the topic for non-virt driver
 3rd party CI in the nova design summit session this morning.

 Basically, there are three possible 3rd party CIs to run for a backend
 database engine:

 1. tempest
 2. unit tests
 3. turbo-hipster

 In Icehouse we were working toward Tempest with 3rd party CI and it's
 running against the WIP patch [2] already.

 Now the question is coming up about whether or not unit test and t-h are
 also requirements for inclusion in Juno.

 Obviously it's in everyone's best interest to run all three and get the
 most coverage possible to feel warm and fuzzy, but to be realistic I'd like
 to prioritize in the same order above, but then the question is if it's
 acceptable to stagger the UT and t-h testing.  A couple of points:

 1. The migration unit tests under nova.tests.db.api will/should cover new
 tables and wrinkles, but I'd argue that Tempest should already be testing
 new tables (and you're getting the biggest test in my experience which is
 actually running the DB migrations when setting up with 'nova-manage db
 sync').  So I consider UT a lower priority and therefore defer-able to a
 later release.

 2. t-h testing is also desirable, but (a) there are no other 3rd party CI
 systems running t-h like they do for Tempest and (b) t-h is only running
 MySQL today, not PostgreSQL which is already in tree.  Given that, I also
 consider t-h testing defer-able and lower priority than getting unit tests
 running against a DB2 backend.



 If we can agree on those priorities, then I'd like to figure out timelines
 and penalties, i.e. when would UT/t-h 3rd party CI be required for DB2, e.g.
 UT in K, t-h in H?  And if those deadlines aren't met, what's the penalty?
 Since the DB2 support is baked into the migration scripts, I'm not sure how
 you can really just rip it out like a virt driver.  The obvious thing to me
 is you just stop accepting any migration changes that are for DB2 support,
 so new migrations could be broken on DB2 and they don't get fixed until the
 CI requirements are met.  Any voting CI system would also be turned off from
 voting/reporting.

 This sounds reasonable to me.

This works for me. The biggest blocker to t-h is probably getting a
reasonable sized test dataset, so deferring that testing for a while
gives you a chance to get that built as well. As to a timeline, I'd
prefer to get the two deferred testing items in K rather than waiting for H.
H is a long time away...

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova] Status of v3 tests in tempest

2014-05-20 Thread GHANSHYAM MANN
I agree we should continue the work on bp/nova-api-test-inheritance, as it
reduces code duplication and will later help to remove the v2 tests easily.
V2.1 tests can be written with the same inheritance design.

-- 
Thanks
Ghanshyam Mann

On Mon, May 19, 2014 at 9:32 PM, Kenichi Oomichi
oomi...@mxs.nes.nec.co.jp wrote:

 Hi David,

  -Original Message-
  From: David Kranz [mailto:dkr...@redhat.com]
  Sent: Monday, May 19, 2014 6:49 PM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [qa][nova] Status of v3 tests in tempest
 
  It seems the nova team decided in Atlanta that v3 as currently
  understood is never going to exist:
  https://etherpad.openstack.org/p/juno-nova-v3-api.
 
  There are a number of patches in flight that tweak how we handle
  supporting both v2/v3 in tempest to reduce duplication.
  We need to decide what to do about this. At a minimum, I think we should
  stop any work that is inspired by any v3-related activity
  except to revert any v2/v3 integration that was already done. We should
  really rip out the v3 stuff that was recently added. I know Matt had
  some concern about that regarding testing v3 in stable/icehouse but
  perhaps he can say more.

 I agree to stop new Nova v3 tests and disable Nova v3 tests in
 the gate for icehouse.
 and I hope we continue working on reducing duplication between
 Nova v2/v3 tests via bp/nova-api-test-inheritance, because it
 would also help the v2.1 API tests avoid duplication
 between v2/v2.1 tests.


 Thanks
 Ken'ichi Ohmichi


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra] Meeting Tuesday May 20th at 19:00 UTC

2014-05-20 Thread James E. Blair
Elizabeth K. Joseph l...@princessleia.com writes:

 On Mon, May 19, 2014 at 9:40 AM, Elizabeth K. Joseph
 l...@princessleia.com wrote:
 Hi everyone,

 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting on Tuesday May 20th, at 19:00 UTC in #openstack-meeting

 Great post-summit meeting, thanks to everyone who joined us.

Yes, great to see new people!

I'm sorry the open discussion period was short today.  It isn't always
like that, but sometimes is.

If you do have something to discuss that you want to make sure to get on
the agenda, feel free to add items by editing the wiki page here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator scale service --- Next steps and goals.

2014-05-20 Thread Susanne Balle
We have had some discussions around how to move forward with the LBaaS
service in OpenStack.  I am trying to summarize the key points below.


Feel free to chime in if I misrepresented anything or if you disagree :-)



For simplicity, and so I can differentiate between all the LBaaS's (e.g.
Neutron LBaaS, etc.), I will name the new OpenStack LBaaS project (that we
discussed at the summit) Octavia in the rest of this email. Note that this
doesn't mean we have agreed on this name.



*Goal:*

We all want to create a best in class “operator scale” Octavia LBaaS
service for our customers.

Following requirements need to be considered (these are already listed in
some of the etherpads we have worked on)

· Provides scalability, failover, config management, and
provisioning.

· Architecture need to be pluggable so we can offer support for
HAProxy, Nginx, LVS, etc.



*Some disagreements exist around the scope of the new project:*



Some of the participating companies including HP are interested in a best
in class standalone Octavia load-balancer service that is part of OpenStack
and with the “label” OpenStack. http://www.openstack.org/software/

· The Octavia LBaaS project needs to work well with OpenStack or
this effort is not worth doing. HP believes that this should be the primary
focus.

· In this case the end goal would be to have a clean interface
between Neutron and the standalone Octavia LBaaS project and have the
Octavia LBaaS project become an incubated and eventual graduated OpenStack
project.

o   We would start out as a driver to Neutron.

o   This project would deprecate Neutron LBaaS long term since part of the
Neutron LBaaS would move over to the Octavia LBaaS project.

o   This project would continue to support both vendor drivers and new
software drivers e.g. ha-proxy, etc.

· Dougwig created the following diagram which gives a good overview
of my thinking: http://imgur.com/cJ63ts3 where Octavia is represented by
“New Driver Interface” and down. The whole picture shows how we could move
from the old to the new driver interface



Other participating companies want to create a best in class standalone
load-balancer service outside of OpenStack and only create a driver to
integrate with Openstack Neutron LBaaS.

· The Octavia LBaaS driver would be part of Neutron LBaaS tree
whereas the Octavia LBaaS implementation would reside outside OpenStack
e.g. Stackforge or github, etc.



The main issue/confusion is that some of us (HP LBaaS team) do not think of
projects in StackForge as OpenStack branded. HP developed Libra LBaaS,
which is open sourced in StackForge, and when we tried to get it into
OpenStack we met resistance.



One person suggested the idea of designing the Octavia LBaaS service
totally independent of Neutron or any other service that calls it. This might
make sense for a general LBaaS service, but given that we are in the context
of OpenStack, to me this just makes the whole testing and development story a
nightmare to maintain, and it isn't necessary. Again, IMHO we are developing
and delivering Octavia in the context of OpenStack, so the Octavia LBaaS
service should just be super good at dealing with the OpenStack environment.
The architecture can still be designed to be pluggable, but my experience
tells me that we will have to make decisions and trade-offs, and at that point
we need to remember that we are doing this in the context of OpenStack and not
in the general context.



*How do we think we can do it?*



We have some agreement around the following approach:



· To start developing the driver/Octavia implementation in
StackForge which should allow us to increase the velocity of our
development using the OpenStack CI/CD tooling (incl. jenkins) to ensure
that we test any change. This will allow us to ensure that changes to
Neutron do not break our driver/implementation as well as the other way
around.

o   We would use Gerrit for blueprints so we have documented reviews and
comments archived somewhere.

o   Contribute patches regularly into the Neutron LBaaS tree:

§  Kyle has volunteered himself and one more core team member to review
and help move a larger patch into the Neutron tree when needed. It was also
suggested that we could do milestones of smaller patches to be merged into
Neutron LBaaS. The latter approach was preferred by most participants.



The main goal behind this approach is to make sure we increase velocity
while still maintaining a good code/design quality. The OpenStack tooling
has shown to work for large distributed virtual teams so let's take
advantage of it.

Carefully planning the various transitions will also be important.



Regards Susanne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator scale service --- Next steps and goals.

2014-05-20 Thread Brandon Logan
Hi Susanne,

One comment in line.

On Tue, 2014-05-20 at 19:12 -0400, Susanne Balle wrote:
 We have had some discussions around how to move forward with the LBaaS
 service in OpenStack.  I am trying to summarize the key points below.
 
 
 Feel free to chime in if I misrepresented anything or if you
 disagree :-)
 
  
 
 For simplicity, and so I can differentiate between all the LBaaS's
 (e.g. Neutron LBaaS, etc.), I will name the new OpenStack LBaaS
 project (that we discussed at the summit) Octavia in the rest of
 this email. Note that this doesn't mean we have agreed on this name.
 
  
 
 Goal:
 
 We all want to create a best in class “operator scale” Octavia LBaaS
 service for our customers.
 
 Following requirements need to be considered (these are already listed
 in some of the etherpads we have worked on)
 
 · Provides scalability, failover, config management, and
 provisioning.
 
 · Architecture need to be pluggable so we can offer support
 for HAProxy, Nginx, LVS, etc.
 
  
 
 Some disagreements exist around the scope of the new project:
 
  
 
 Some of the participating companies including HP are interested in a
 best in class standalone Octavia load-balancer service that is part of
 OpenStack and with the “label”
 OpenStack. http://www.openstack.org/software/
 
 · The Octavia LBaaS project needs to work well with OpenStack
 or this effort is not worth doing. HP believes that this should be the
 primary focus.
 
 · In this case the end goal would be to have a clean interface
 between Neutron and the standalone Octavia LBaaS project and have the
 Octavia LBaaS project become an incubated and eventual graduated
 OpenStack project.
 
 o   We would start out as a driver to Neutron.
 
 o   This project would deprecate Neutron LBaaS long term since part of
 the Neutron LBaaS would move over to the Octavia LBaaS project.
 
 o   This project would continue to support both vendor drivers and new
 software drivers e.g. ha-proxy, etc.
 
 · Dougwig created the following diagram which gives a good
 overview of my thinking: http://imgur.com/cJ63ts3 where Octavia is
 represented by “New Driver Interface” and down. The whole picture
 shows how we could move from the old to the new driver interface
 
  
 
 Other participating companies want to create a best in class
 standalone load-balancer service outside of OpenStack and only create
 a driver to integrate with Openstack Neutron LBaaS.  
 
 · The Octavia LBaaS driver would be part of Neutron LBaaS tree
 whereas the Octavia LBaaS implementation would reside outside
 OpenStack e.g. Stackforge or github, etc.

I don't think they want Octavia LBaaS to be in stackforge, just the
HA/scalable provider implementation (HaProxy, Nginx, LVS, etc).  So the
API would still be the frontend of the Octavia LBaaS (which would be an
OpenStack project), but the API would call a driver (that still needs to
be created as well) that was written to talk to this HA/scalable
provider implementation.  The actual code for this HA/scalable provider
would exist in stackforge, much like a vendor (radware, netscaler, etc)
would not put their code in the Octavia LBaaS's tree.

Just thought I'd clear that up so we can get other people's thoughts on
this.
 
  
 
 The main issue/confusion is that some of us (HP LBaaS team) do not
 think of projects in StackForge as OpenStack branded. HP developed
 Libra LBaaS, which is open sourced in StackForge, and when we tried to
 get it into OpenStack we met resistance.
 
  
 
 One person suggested the idea of designing the Octavia LBaaS service
 totally independent of Neutron or any other service that calls it. This
 might make sense for a general LBaaS service, but given that we are in
 the context of OpenStack, to me this just makes the whole testing and
 development story a nightmare to maintain, and it isn't necessary.
 Again, IMHO we are developing and delivering Octavia in the context of
 OpenStack, so the Octavia LBaaS service should just be super good at
 dealing with the OpenStack environment. The architecture can still be
 designed to be pluggable, but my experience tells me that we will have
 to make decisions and trade-offs, and at that point we need to remember
 that we are doing this in the context of OpenStack and not in the
 general context.
 
  
 
 How do we think we can do it?
 
  
 
 We have some agreement around the following approach:
 
  
 
 · To start developing the driver/Octavia implementation in
 StackForge which should allow us to increase the velocity of our
 development using the OpenStack CI/CD tooling (incl. jenkins) to
 ensure that we test any change. This will allow us to ensure that
 changes to Neutron do not break our driver/implementation as well as
 the other way around.
 
 o   We would use Gerrit for blueprints so we have documented reviews
 and comments archived somewhere.
 
 o   Contribute patches regularly into the Neutron LBaaS tree:
 
 §  Kyle has volunteered himself and one more core team member to review
 and help move a larger patch into the Neutron tree when needed.

Re: [openstack-dev] [python-openstacksdk] Meeting Minutes - 2014-05-20

2014-05-20 Thread Joe Gordon
On Tue, May 20, 2014 at 1:37 PM, Brian Curtin br...@python.org wrote:

 Next meeting will be 2014-05-27

 Minutes:
 http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-05-20-19.00.html


These meeting minutes are very sparse.




 Minutes (text):

 http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-05-20-19.00.txt

 Log:
 http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-05-20-19.00.log.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] [Heat][Documentation] Heat template documentation

2014-05-20 Thread Steve Baker
On 21/05/14 02:31, Doug Hellmann wrote:
 On Fri, May 16, 2014 at 2:10 PM, Gauvain Pocentek
 gauvain.pocen...@objectif-libre.com wrote:
 Le 2014-05-16 17:13, Anne Gentle a écrit :

 On Thu, May 15, 2014 at 10:34 AM, Gauvain Pocentek
 gauvain.pocen...@objectif-libre.com wrote:

 Hello,

 This mail probably mainly concerns the doc team, but I guess that the
 heat team wants to know what's going on.

 We've shortly discussed the state of heat documentation with Anne Gentle
 and Andreas Jaeger yesterday, and I'd like to share what we think would be
 nice to do.

 Currently we only have a small section in the user guide that describes
 how to start a stack, but nothing documenting how to write templates. The
 heat developer doc provides a good reference, but I think it's not easy to
 use to get started.

 So the idea is to add an OpenStack Orchestration chapter in the user
 guide that would document how to use a cloud with heat, and how to write
 templates.

 I've drafted a spec to keep track of this at [0].

 I'd like to experiment a bit with converting the End User Guide to an
 easier markup to enable more contributors to it. Perhaps bringing in
 Orchestration is a good point to do this, plus it may help address the
 auto-generation Steve mentions.

 The loss would be the single sourcing of the End User Guide and Admin
 User Guide as well as loss of PDF output and loss of translation. If
 these losses are worthwhile for easier maintenance and to encourage
 contributions from more cloud consumers, then I'd like to try an
 experiment with it.

 Using RST would probably make it easier to import/include the developers'
 documentation. But I'm not sure we can afford to loose the features you
 mention. Translations for the user guides are very important I think.
 Sphinx does appear to have translation support:
 http://sphinx-doc.org/intl.html?highlight=translation

 I've never used the feature myself, so I don't know how good the workflow is.

 Sphinx will generate PDFs, though the LaTeX output is not as nice
 looking as what we get now. There's also a direct-to-pdf builder that
 uses rst2pdf that appears to support templates, so that might be an
 easier path to producing something attractive:
 http://ralsina.me/static/manual.pdf
I attempted to make latexpdf on the heat sphinx docs and fell down a
latex tool-chain hole.

I tried adding rst2pdf support to the sphinx docs build:
https://review.openstack.org/#/c/94491/

and the results are a reasonable start:
https://drive.google.com/file/d/0B_b9ckHiNkjVS3ZNZmNXMkJkWE0/edit?usp=sharing
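
For reference, the wiring is roughly this in the Sphinx conf.py (a sketch;
the document tuple values are just examples -- the actual change is in the
review above):

    # Enable the rst2pdf builder alongside the existing extensions.
    extensions = ['rst2pdf.pdfbuilder']

    # (startdocname, targetname, title, author) per the rst2pdf docs.
    pdf_documents = [
        ('index', u'heat-docs', u'Heat Documentation', u'OpenStack'),
    ]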

 How would we review changes made in external repositories? The user guides
 are continuously published, this means that a change done in the heat/docs/
 dir would quite quickly land on the webserver without a doc team review. I
 completely trust the developers, but I'm not sure that this is the way to
 go.


 The experiment would be to have a new repo set up,
 openstack/user-guide and use the docs-core team as reviewers on it.
 Convert the End User Guide from DocBook to RST and build with Sphinx.
 Use the oslosphinx tempate for output. But what I don't know is if
 it's possible to build the automated output outside of the
 openstack/heat repo, does anyone have interest in doing a proof of
 concept on this?

 I'm not sure that this is possible, but I'm no RST expert.
 I'm not sure this quite answers the question, but the RST directives
 for auto-generating docs from code usually depend on being able to
 import the code. That means heat and its dependencies would need to be
 installed on the system where the build is performed. We accomplish
 this in the dev doc builds by using tox, which automatically handles
 the installation as part of setting up the virtualenv where the build
 command runs.
I'm sure we could do a git checkout of heat during the docs build, and
even integrate that with gating. I thought this was already happening
for some docbook builds, but I can't find any examples now.

 I'd also like input on the loss of features I'm describing above. Is
 this worth experimenting with?

 Starting this new book sounds like a lot of work. Right now I'm not
 convinced it's worth it.


How about this for a suggestion. The Heat template authoring guide is
potentially so large and different that it deserves to be in its own
document. It is aimed at users, but there is so much potential content
hidden in the template format that it wouldn't necessarily belong in the
current user guide.

We could start a new doc repo which is a sphinx-based template authoring
guide. It will have a bunch of manually written content plus resource
reference built from a heat git checkout.

If this all works out then we can consider adding the user guide content
to the heat template authoring guide, resulting in a new merged
sphinx-based user guide.

Opinions?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [oslo][nova] how to best deal with default periodic task spacing behavior

2014-05-20 Thread Joe Gordon
On Tue, May 20, 2014 at 8:00 AM, Davanum Srinivas dava...@gmail.com wrote:

 @Matt,

 Agree, My vote would be to change existing behavior.


Same.  I think it's reasonable to say the current behavior is not ideal
(otherwise we wouldn't have changed it), and that the new behavior is
better for the vast majority of folks.
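
For anyone following along, the knob in question is the spacing argument to
the periodic task decorator. A minimal sketch using the oslo-incubator
periodic_task module as carried in nova (class and method names invented):

    from nova.openstack.common import periodic_task

    class ExampleManager(object):
        @periodic_task.periodic_task
        def _no_spacing(self, context):
            # No spacing given: today this runs every time the periodic
            # task processor wakes up, which is the behavior in question.
            pass

        @periodic_task.periodic_task(spacing=60)
        def _every_minute(self, context):
            # Explicit spacing: runs at most once every 60 seconds.
            pass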



 -- dims

 On Tue, May 20, 2014 at 10:15 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
  Between patch set 1 and patch set 3 here [1] we have different solutions
 to
  the same issue, which is if you don't specify a spacing value for
 periodic
  tasks then they run whenever the periodic task processor runs, which is
  non-deterministic and can be staggered if some tasks don't complete in a
  reasonable amount of time.
 
  I'm bringing this to the mailing list to see if there are more opinions
 out
  there, especially from operators, since patch set 1 changes the default
  behavior to have the spacing value be the DEFAULT_INTERVAL (hard-coded 60
  seconds) versus patch set 3 which makes that behavior configurable so the
  admin can set global default spacing for tasks, but defaults to the
 current
  behavior of running every time if not specified.
 
  I don't like a new config option, but I'm also not crazy about changing
  existing behavior without consensus.
 
  [1] https://review.openstack.org/#/c/93767/
 
  --
 
  Thanks,
 
  Matt Riedemann
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Davanum Srinivas :: http://davanum.wordpress.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]LBaaS 2nd Session etherpad

2014-05-20 Thread Carlos Garza
I'm reading through the https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL 
docs as well as the https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7
document that you're referencing below, and I think whoever wrote the documents 
may have misunderstood the association between X509 certificates and private 
and public keys.
I think we should clean those up and unambiguously declare the following.

A certificate shall be defined as a PEM encoded X509 certificate.
For example

Certificate:
-BEGIN CERTIFICATE-
   blah blah blah base64 stuff goes here
-END CERTIFICATE-

A private key shall be a PEM encoded private key that is not necessarily 
an RSA key. For example, it could be an elliptic curve key, but most likely 
it will be RSA.



A public key shall mean an actual PEM encoded public key, and not the X509 
certificate that contains it. For example:
-BEGIN PUBLIC KEY-
bah blah blah base64 stuff goes here
-END PUBLIC KEY-

A private key shall mean a PEM encoded private key. For example:
-BEGIN RSA PRIVATE KEY-
blah blah blah base64 goes here.
-END RSA PRIVATE KEY-

Also, the same key could be encoded as PKCS8:

-BEGIN PRIVATE KEY-
base64 stuff here
-END PRIVATE KEY-

I would think that we should allow for PKCS8 so that users are not restricted 
to PKCS1 RSA keys (the "BEGIN PRIVATE KEY" form above). I'm OK with forcing 
the user not to use PKCS8 to send both the certificate and key together.
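
As a quick illustration of the distinction (a sketch with pyOpenSSL; the
file names are just examples): load_certificate() only accepts an X509
certificate, while load_privatekey() takes both of the private key PEM
forms above.

    from OpenSSL import crypto

    with open('server.crt') as f:
        # Only parses "BEGIN CERTIFICATE" blocks, i.e. a real X509 cert.
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())

    with open('server.key') as f:
        # Accepts both "BEGIN RSA PRIVATE KEY" (PKCS1) and
        # "BEGIN PRIVATE KEY" (PKCS8) encodings.
        key = crypto.load_privatekey(crypto.FILETYPE_PEM, f.read())

    # The public key is embedded in the certificate; it is not the
    # certificate itself.
    pubkey = cert.get_pubkey()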

There seems to be confusion in the neutron-lbaas-ssl-l7 etherpad doc as well 
as the doc at https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7, 
the confusion being that the terms "public key" and "certificate" are being 
used interchangeably.

For example, in the wiki page
under "Resource change:",
"SSL certificate (new)" declares:

certificate_chain : list of PEM-formatted public keys, not mandatory
This should be changed to
certificate_chain: list of PEM-formatted x509 certificates, not mandatory

Also, in the CLI portion of the doc there are entries like:
neutron ssl-certificate-create --public-key CERTIFICATE-FILE --private-key 
PRIVATE-KEY-FILE --passphrase PASSPHRASE --cert-chain 
INTERMEDIATE-KEY-FILE-1, INTERMEDIATE-KEY-FILE-2 certificate name
The option --public-key should be changed to --cert since it specifies the 
X509. Also, the names INTERMEDIATE-KEY-FILE-1 etc. should be changed to 
INTERMEDIATE-CERT-FILE-1, since these are certificates and not keys.


The below line makes no sense to me:
neutron ssl-trusted-certificate-create --key PUBLIC-KEY-FILE key name

Are you trying to give the certificate a name? We will also never need to work 
with public keys in general, as the public key can be extracted from the X509 or 
the private key file.
Or was the intent to use ssl-trusted-certificates to specify the private keys 
that the load balancer will use when communicating with back end servers that 
are doing client auth?

The rationale portion of the doc declares that trusted certificates are for 
back end encryption, but it doesn't mention whether this is for client auth 
either. Was the intent to use a specific key for the SSL session between the 
load balancer and the back end server, or was the intention to advertise the 
client cert to the backend server so that the back end server can authenticate 
against whatever CA it (the server) trusts?

In either case, both the private key and the certificate or chain should be used 
in this configuration, since the load balancer needs the private key during the 
SSL session.
The command should look something along the lines of:
neutron ssl-trusted-certificate-create --key PRIVATE-KEY-FILE --cert 
CERTIFICATE-FILE


I would like to help out with this, but I need to know the intent of the 
person who initially interchanged the terms key and certificate, and it's much 
better to fix this sooner rather than later.


On May 15, 2014, at 10:58 PM, Samuel Bercovici 
samu...@radware.com wrote:

Hi Everyone,

https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7

Feel free to modify and update, please make sure you use your name so we will 
know who have added the modification.

Regards,
-Sam.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Meeting Minutes - 2014-05-20

2014-05-20 Thread Brian Curtin
On Tue, May 20, 2014 at 6:43 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Tue, May 20, 2014 at 1:37 PM, Brian Curtin br...@python.org wrote:

 Next meeting will be 2014-05-27

 Minutes:
 http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-05-20-19.00.html


 These meeting minutes are very sparse.

Sorry about that. The meeting started with light attendance and was
more conversational, and then I forgot about MeetBot until it came time to
shut it off. Future meetings will go back to using the appropriate commands
to generate useful minutes.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] - Integration with neutron using external attachment point

2014-05-20 Thread Kevin Benton
Hi Devananda,

Most of this should work fine. The only problem part is handling the
servers that are first being booted and have never been connected to
Ironic. Neutron doesn't have control over the default network that all
un-provisioned switch ports should be a member of. Even if we added support
for this, the management network that you would likely want them to be on
is normally a network not known to Neutron.

For that workflow to work, I think the switch-ports should be manually
configured to be in the management VLAN by default. Then the servers will
be able to boot up and receive their PXE image from Ironic, etc. Once that is
done, Ironic will create an external attachment point using the information
learned from LLDP. It's then up to the backend implementation to ensure
that when that external attachment point isn't associated to a specific
neutron network that it will be in the default network it was configured in
to begin with.

The workflow would then be (a rough sketch of steps 4 and 5 follows the list):
1. Admin puts all switch ports that might have Ironic servers plugged into
them into the management network.
2. A new Ironic server is plugged in, successfully boots on the management
network, and learns its switch ID/port from LLDP.
3. The Ironic management server makes a call to Neutron to create an
external attachment point using the switch ID/port received from the new
server.
4. When the server is being assigned to a tenant, Ironic passes the
external attachment ID to Nova, which adds it to the neutron port creation
request.
5. Neutron will then assign the external attachment point to the network in
the port creation request, at which point the backend will be triggered to
configure the switch-port for appropriate VLAN access, etc.
6. Once the server is terminated, Ironic will remove the network ID from
the external attachment point, which will instruct the Neutron backend to
return the port to the default VLAN it was in before. In this case it would
be the management VLAN and it would be back on the appropriate network for
provisioning again.
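
A rough sketch of what steps 4 and 5 might look like from Nova's side with
python-neutronclient (the binding:profile key used to carry the attachment
ID is an assumption on my part, not settled API):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')

    # Steps 4/5: create the port, referencing the external attachment
    # point so the backend can configure the physical switch port.
    port = neutron.create_port({
        'port': {
            'network_id': 'TENANT-NET-UUID',
            'binding:profile': {
                # hypothetical key; the real name is up for review
                'external_attachment_point_id': 'EAP-UUID',
            },
        }
    })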

Does that make sense?

Thanks,
Kevin Benton



On Tue, May 20, 2014 at 9:48 AM, Devananda van der Veen 
devananda@gmail.com wrote:

 Hi Kevin!

 I had a few conversations with folks at the summit regarding this. Broadly
 speaking, yes -- this integration would be very helpful for both discovery
 and network/tenant isolation at the bare metal layer.

 I've left a few comments inline


 On Mon, May 19, 2014 at 3:52 PM, Kevin Benton blak...@gmail.com wrote:

 Hello,

 I am working on an extension for neutron to allow external attachment
 point information to be stored and used by backend plugins/drivers to place
 switch ports into neutron networks[1].

 One of the primary use cases is to integrate ironic with neutron. The
 basic workflow is that ironic will create the external attachment points
 when servers are initially installed.


 This also should account for servers that are already racked, which Ironic
 is instructed to manage. These servers would be booted into a discovery
 state, eg. running ironic-python-agent, and hardware information
 (inventory, LLDP data, etc) could be sent back to Ironic.

 To do this, nodes not yet registered with Ironic will need to be PXE
 booted on a common management LAN (either an untagged VLAN or a specific
 management VLAN) that can route HTTP(S) and TFTP traffic to instances of
 the ironic-api and ironic-conductor services. How will the routing be done
 by Neutron for unknown ports?


 This step could either be automated (extracting the switch ID and port
 number from the LLDP message) or performed manually by an admin who notes
 the ports a server is plugged into.


 Ironic could extract info from LLDP if the machine has booted into the
 ironic-python-agent ramdisk and is able to communicate with Ironic
 services. So it needs to be networked /before/ it's enrolled with Ironic.
 If that's possible -- great. I believe this is the workflow that the IPA
 team intends to follow.

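 As a rough illustration (this is not actual ironic-python-agent code),
 pulling the chassis and port identifiers out of a raw LLDP frame is only
 a few lines of TLV parsing:

     import struct

     def parse_lldp(frame):
         """Return (chassis_id, port_id) from a raw LLDP Ethernet frame."""
         payload = frame[14:]  # skip dst MAC, src MAC, ethertype (0x88cc)
         chassis_id = port_id = None
         offset = 0
         while offset + 2 <= len(payload):
             header, = struct.unpack_from('!H', payload, offset)
             tlv_type, tlv_len = header >> 9, header & 0x01ff
             value = payload[offset + 2:offset + 2 + tlv_len]
             if tlv_type == 0:      # End of LLDPDU
                 break
             elif tlv_type == 1:    # Chassis ID; first byte is the subtype
                 chassis_id = value[1:]
             elif tlv_type == 2:    # Port ID; first byte is the subtype
                 port_id = value[1:]
             offset += 2 + tlv_len
         return chassis_id, port_id
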
 Setting it manually should also, of course, be possible, but less
 manageable with large numbers of servers.



 Then when an instance is chosen for assignment and the neutron port needs
 to be created, the creation request would reference the corresponding
 attachment ID and neutron would configure the physical switch port to place
 the port on the appropriate neutron network.


 Implementation question here -- today, Nova does the network attachment
 for instances (or at least, Nova initiates the calls out to Neutron).
 Ironic can expose this information to Nova and allow Nova to coordinate
 with Neutron, or Ironic can simply call out to Neutron, as it does today
 when setting the dhcp extra options. I'm not sure which approach is better.

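 For reference, the existing Ironic-to-Neutron interaction is just a port
 update carrying DHCP options, e.g. via python-neutronclient (all values
 below are illustrative):

     from neutronclient.v2_0 import client

     neutron = client.Client(username='ironic', password='secret',
                             tenant_name='service',
                             auth_url='http://127.0.0.1:5000/v2.0')

     port_id = 'PORT-UUID'  # the node's Neutron port

     # Point the node's port at the TFTP server so it can PXE boot.
     neutron.update_port(port_id, {
         'port': {
             'extra_dhcp_opts': [
                 {'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'},
                 {'opt_name': 'tftp-server', 'opt_value': '192.0.2.10'},
             ]
         }
     })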

 Cheers,
 Devananda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton

Re: [openstack-dev] [Ironic][Neutron] - Integration with neutron using external attachment point

2014-05-20 Thread Kevin Benton
Hi Russell,

Thanks for sharing this. I introduced this as an extension so it can
hopefully be used by ML2 and any other plugin that includes the mixin.

I have a couple of questions about the workflow you described:

1. Nova calls Neutron to create a virtual port. Because this happens
_before_ Nova touches the virt driver, the port is at this point identical
to one created for a virtual server.
6. Ironic updates each of the customer's Neutron ports with information
about its physical switch port.

To reduce API calls, did you look into whether it's possible to delay
creating the Neutron port until the information from Ironic is available?
Or does port creation happen long before Ironic is called?

5. Ironic deletes a Neutron port that was created at bootstrap time to
trunk the physical switch ports for provisioning.

What is the process for creating this port in the first place? Is the
management network used to provision Ironic instances known to Neutron?

Thanks,
Kevin Benton



On Tue, May 20, 2014 at 3:01 PM, Russell Haering
russellhaer...@gmail.com wrote:

 We've been experimenting some with how to use Neutron with Ironic here at
 Rackspace.

 Our very experimental code:
 https://github.com/rackerlabs/ironic-neutron-plugin

 Our objective is the same as what you're describing, to allow Nova servers
 backed by Ironic to attach to arbitrary Neutron networks. We're initially
 targeting VLAN-based networks only, but eventually want to do VXLAN from
 the top-of-rack switches, controlled via an SDN controller.

 Our approach is a little different from what you're describing, though.
 Our objective is to modify the existing Nova-Neutron interaction as
 little as possible, which means approaching the problem by asking how
 would an L2 agent do this?

 The workflow looks something like:

 1. Nova calls Neutron to create a virtual port. Because this happens
 _before_ Nova touches the virt driver, the port is at this point identical
 to one created for a virtual server.
 2. Nova executes the spawn method of the Ironic virt driver, which makes
 some calls to Ironic.
 3. Inside Ironic, we know about the physical switch ports that the
 selected Node is connected to. This information is discovered early-on
 using LLDP and stored in the Ironic database.
 4. We actually need the node to remain on an internal provisioning VLAN
 for most of the provisioning process, but once we're done with on-host work
 we turn the server off.
 5. Ironic deletes a Neutron port that was created at bootstrap time to
 trunk the physical switch ports for provisioning.
 6. Ironic updates each of the customer's Neutron ports with information
 about its physical switch port.
 7. Our Neutron extension configures the switches accordingly.
 8. Then Ironic brings the server back up.

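 Roughly, the port update in step 6 carries the switch mapping learned in
 step 3, something like the following (the binding:profile keys are made
 up for illustration -- the exact schema is defined by our experimental
 extension, not by stock Neutron):

     from neutronclient.v2_0 import client

     neutron = client.Client(username='ironic', password='secret',
                             tenant_name='service',
                             auth_url='http://127.0.0.1:5000/v2.0')

     customer_port_id = 'PORT-UUID'  # one of the customer's Neutron ports

     neutron.update_port(customer_port_id, {
         'port': {
             'binding:profile': {
                 'local_link_information': [
                     {'switch_id': '00:1b:21:aa:bb:cc',
                      'port_id': 'Eth1/14'},
                 ]
             }
         }
     })
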
 The destroy process basically does the reverse. Ironic removes the
 physical switch mapping from the Neutron ports, re-creates an internal
 trunked port, does some work to tear down the server, then passes control
 back to Nova. At that point Nova can do what it wants with the Neutron
 ports. Hypothetically that could include allocating them to a different
 Ironic Node, etc, although in practice it just deletes them.

 Again, this is all very experimental in nature, but it seems to work
 fairly well for the use-cases we've considered. We'd love to find a way to
 collaborate with others working on similar problems.

 Thanks,
 Russell


 On Tue, May 20, 2014 at 7:17 AM, Akihiro Motoki amot...@gmail.com wrote:

 # Added [Neutron] tag as well.

 Hi Igor,

Thanks for the comment. We are already aware of them, as I noted in the
Summit session and the ML2 weekly meeting.
Kevin's blueprint now covers Ironic integration and the layer-2 network
gateway, and I believe the campus-network blueprint will be covered as well.

We think the work can be split into a generic API definition and
implementations (including ML2). In the external attachment point blueprint
review, the API and other generic topics have mainly been discussed so far,
and the implementation details have not been discussed much yet. The ML2
implementation details can be discussed later (separately or as part of the
blueprint review).

I am not sure what changes are proposed in blueprint [1].
AFAIK an SDN/OpenFlow-controller-based approach can support this, but how
can we achieve it with the existing open source implementations?
I am also interested in the ML2 implementation details.

 Anyway more input will be appreciated.

 Thanks,
 Akihiro

 On Tue, May 20, 2014 at 7:13 PM, Igor Cardoso igordc...@gmail.com
 wrote:
  Hello Kevin.
  There is a similar Neutron blueprint [1], originally meant for Havana
 but
  now aiming for Juno.
  I would be happy to join efforts with you regarding our blueprints.
  See also: [2].
 
  [1] https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
  [2] https://blueprints.launchpad.net/neutron/+spec/campus-network
 
 
  On 19 May 2014 23:52, Kevin Benton blak...@gmail.com wrote:
 
  Hello,
 
  I am working on an 
