Re: [openstack-dev] [Openstack-operators] [tc][all][osprofiler] OSprofiler is dead, long live OSprofiler

2015-11-09 Thread Daniel P. Berrange
On Mon, Nov 09, 2015 at 02:57:37AM -0800, Boris Pavlovic wrote:
> Hi stackers,
> 
> Intro
> ---
> 
> It's not a big secret that OpenStack is a huge and complicated ecosystem
> of different services that work together to implement the OpenStack API.
> 
> For example, booting a VM goes through many projects and services:
> nova-api, nova-scheduler, nova-compute, glance-api, glance-registry,
> keystone, cinder-api, neutron-api... and many others.
> 
> The question is how to understand which part of the request takes most of
> the time and should be improved. It's especially interesting to get such
> data under load.
> 
> To make this simple, I wrote OSProfiler, a tiny library that should be
> added to all OpenStack projects to create a cross-project/service
> tracer/profiler.
> 
> Demo (trace of CLI command: nova boot) can be found here:
> http://boris-42.github.io/ngk.html
> 
> This library is very simple. For those who want to know how it works and
> how it's integrated with OpenStack, take a look here:
> https://github.com/openstack/osprofiler/blob/master/README.rst
> 
> What is the current status?
> ---
> 
> Good news:
> - OSProfiler is mostly done
> - OSProfiler is integrated with Cinder, Glance, Trove & Ceilometer
> 
> Bad news:
> - OSProfiler is not integrated in a lot of important projects: Keystone,
> Nova, Neutron
> - OSProfiler can only use Ceilometer + oslo.messaging as a backend
> - OSProfiler stores part of its arguments in api-paste.ini and part in
> project.conf, which is a terrible thing
> - There is no DSVM job that checks that changes in OSProfiler don't break
> the projects that are using it
> - It's hard to enable OSProfiler in DevStack
> 
> Good news:
> I spent some time and wrote 4 specs that should address most of these issues:
> https://github.com/openstack/osprofiler/tree/master/doc/specs
> 
> Let's make it happen in Mitaka!
> 
> Thoughts?
> By the way, would somebody like to join this effort? :)

I'm very interested in seeing this kind of capability integrated across
openstack. I've really needed it many times when working on Nova.
6 months back or so (when I didn't know osprofiler existed), I hacked up
a roughly equivalent library for openstack:

   https://github.com/berrange/oslo.devsupport

I never had time to pursue this further, and then I found out about
osprofiler, so it dropped in priority for me.

Some notable things I think I did differently:

 - Used oslo.versionedobjects for recording the data, to provide a
   structured data model with easy support for future extension and a
   well defined serialization format (a sketch follows this list)

 - Structured data types for all the various common types of operation
   to instrument (database request, RPC call, RPC dispatch, REST call,
   REST dispatch, native library call, thread spawn, thread main,
   external command spawn). This is to ensure all apps using the library
   provide the same set of data for each type of operation.

 - Ability to capture stack traces against each profile point to
   allow creation of control flow graphs showing which code paths
   consumed significant time.

 - Abstract "consumer" API for different types of persistence backend.
   Rather than ceilometer, my initial consumer just serialized to
   plain files in well known directories, using oslo.versionedobjects
   serialization format. I can see ceilometer might be nice for
   production deployments, but plain files was simpler for developer
   environments which might not even be running ceilometer

 - Explicit tracking of nesting, so each instrumented operation records
   its parent operation. At the top level were things like thread main,
   RPC dispatch and REST dispatch. IIUC, with osprofiler you could
   potentially infer the nesting by sorting on start/end timestamps, but
   I felt an explicit representation was nicer to work with from the
   object model POV.
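
On the oslo.versionedobjects point above, a hedged sketch of what such a
structured, versioned data model could look like, using the current
oslo_versionedobjects API; the class and field names here are hypothetical,
not the actual oslo.devsupport ones:

```python
from oslo_versionedobjects import base
from oslo_versionedobjects import fields


@base.VersionedObjectRegistry.register
class DatabaseRequest(base.VersionedObject):
    # Bump VERSION when fields change; the serialization format stays
    # well defined across versions.
    VERSION = '1.0'

    fields = {
        'statement': fields.StringField(),
        'start_time': fields.DateTimeField(),
        'end_time': fields.DateTimeField(nullable=True),
        # Explicit parent reference, per the nesting point below.
        'parent_id': fields.UUIDField(nullable=True),
    }
```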

My approach would require oslo.devsupport to be integrated into the
oslo.concurrency, oslo.db and oslo.messaging components, as well as the
various apps. I did quick hacks to enable this for Nova & the pieces it
uses, which you can see in these (non-upstreamed) commits:

  
   https://github.com/berrange/oslo.concurrency/commit/2fbbc9cf4f23c5c2b30ff21e9e06235a79edbc20
   https://github.com/berrange/oslo.messaging/commit/5a1fd87650e56a01ae9d8cc773e4d030d84cc6d8
   https://github.com/berrange/nova/commit/3320b8957728a1acf786296eadf0bb40cb4df165

I see you already intend to make ceilometer optional and allow other
backends, so that's nice. I would love to see OSProfiler take on some
of the other ideas I had in my alternative, particularly the data model
based around oslo.versionedobjects to provide standard format for various
core types of operation, and the ability to record stack traces.
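
For readers who haven't followed the README link above, osprofiler's public
surface boils down to a couple of entry points; a minimal sketch (the HMAC
key and trace point names are placeholders):

```python
from osprofiler import profiler

# Initialize tracing for this thread/request with a shared HMAC key.
profiler.init("SECRET_HMAC_KEY")


# Trace a whole function...
@profiler.trace("db_query")
def get_instance(uuid):
    return {"uuid": uuid}


# ...or an arbitrary block of code.
with profiler.Trace("rpc_call", info={"method": "boot"}):
    get_instance("fake-uuid")
```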

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- 

Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-09 Thread Duncan Thomas
On 9 November 2015 at 09:04, Zhenyu Zheng  wrote:

>  And the Nova side also doesn't support detaching the root device, which
> means we cannot perform volume backup/restore from the Cinder side, because
> those actions need the volume to be in "available" status.
>
>

It might be of interest to note that volume snapshots have always worked on
attached volumes, and as of Liberty the backup operation supports a
--force=True option that does a backup of a live volume (via an internal
snapshot, so it should be crash consistent).
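
For illustration, a hedged sketch of that call through python-cinderclient
(credentials are placeholders, and the exact keyword signature should be
checked against the installed client version):

```python
from cinderclient import client

# Placeholder credentials/endpoint.
cinder = client.Client('2', 'user', 'password', 'project',
                       'http://keystone:5000/v2.0')

# As of Liberty, force=True backs up an attached ("in-use") volume via an
# internal snapshot instead of requiring "available" status.
backup = cinder.backups.create('volume-uuid', name='live-backup', force=True)
```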


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] summarizing the cross-project summit session on Mitaka themes

2015-11-09 Thread Thierry Carrez
Doug Hellmann wrote:
> One thing I forgot to mention in my original email was the discussion
> about when we would have this themes conversation for the N cycle.
> I had originally hoped we would discuss the themes online before
> the summit, and that those would inform decisions about summit
> sessions. Several other folks in the room made the point that we
> were unlikely to come up with a theme so surprising that we would
> add or drop a summit session from any existing planning, so having
> the discussion in person at the summit to add background to the
> other sessions for the week was more constructive.

So I was stuck in Design Summit 101 during this session and couldn't
attend. Saying that discussing common themes before summit planning is
unlikely to change design summit session contents strikes me as odd.
That's equivalent to saying that the right themes are picked anyway.
That may be the case for some projects, but certainly not the case for
all projects, otherwise we wouldn't be having that discussion to begin
with...

Personally I think we need to have the cycle themes discussion before
the design summit so that it can really influence what gets discussed
there and ends up in real action items. Adding background at the last
minute to already-decided session topics is just not enough to trigger
real progress in those key areas.

Why can't we have that discussion on the ML in the second half of the
cycle, between the midcycle sprints and the summit planning?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] notification subteam meeting

2015-11-09 Thread Balázs Gibizer
Hi, 

The first meeting of the nova notification subteam will happen on Tuesday
2015-11-10 at 20:00 UTC [1] in #openstack-meeting-alt on freenode.

Agenda [2]:
 - Agree on the mission and the tasks of the team
 - Status of the outstanding specs
 - Q/A
 - AOB

See you there.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20151110T20 
[2] https://wiki.openstack.org/wiki/Meetings/NovaNotification 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack]host not reachable with iptables reject after init

2015-11-09 Thread Wilence Yao
Hi all,
After I ran devstack's stack.sh to completion, I found that the API is not
reachable. After some checking, I found the iptables rules causing the problem:

```
Chain INPUT (policy ACCEPT)
target                  prot opt source       destination
nova-network-INPUT      all  --  0.0.0.0/0    0.0.0.0/0
neutron-openvswi-INPUT  all  --  0.0.0.0/0    0.0.0.0/0
nova-api-INPUT          all  --  0.0.0.0/0    0.0.0.0/0
ACCEPT                  udp  --  0.0.0.0/0    0.0.0.0/0    udp dpt:53
ACCEPT                  tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:53
ACCEPT                  udp  --  0.0.0.0/0    0.0.0.0/0    udp dpt:67
ACCEPT                  tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:67
ACCEPT                  all  --  0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
ACCEPT                  icmp --  0.0.0.0/0    0.0.0.0/0
ACCEPT                  all  --  0.0.0.0/0    0.0.0.0/0
ACCEPT                  tcp  --  0.0.0.0/0    0.0.0.0/0    state NEW tcp dpt:22
REJECT                  all  --  0.0.0.0/0    0.0.0.0/0    reject-with icmp-host-prohibited
```

The last two rules reject all access to the host except port 22 (SSH). Why
does devstack add these two rules on the host?

Wilence Yao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-ovs-dpdk]

2015-11-09 Thread Samta Rangare
Hello Everyone,

I am installing devstack with networking-ovs-dpdk. The local.conf looks
exactly like the one available in
/opt/stack/networking-ovs-dpdk/doc/source/_downloads/local.conf.single_node,
so I believe all the necessary configuration will be taken care of.

However, I am stuck at the place where devstack is trying to set an
external-id ($ sudo ovs-vsctl br-set-external-id br-ex bridge-id br-ex).
As soon as it hits this point it just hangs forever. I tried commenting
this line out in lib/neutron_plugins/ml2 (I know this is wrong) and then
all services came up except the ovs-dpdk agent and the ovs agent.

BTW I am deploying on Ubuntu 14.04. Any pointer will be really helpful.

Thanks,
Samta

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova API sub-team meeting

2015-11-09 Thread Ed Leafe
On Nov 9, 2015, at 7:11 AM, Alex Xu  wrote:
> 
> We have weekly Nova API meeting this week. The meeting is being held Tuesday 
> UTC1200.
> 
> In other timezones the meeting is at:
> 
> EST 08:00 (Tue)

Just to clarify: that's EST 07:00, since daylight saving time ended in the US
last week.

> Japan 21:00 (Tue)
> China 20:00 (Tue)
> United Kingdom 13:00 (Tue)


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-09 Thread Doug Hellmann
Excerpts from Lingxian Kong's message of 2015-11-09 15:00:25 +0800:
> On Wed, Nov 4, 2015 at 3:46 AM, Doug Hellmann  wrote:
> > 1. We need one patch to the master branch of the project to add the
> >instructions for publishing the notes as part of the project
> >sphinx documentation build.  An example patch for Glance is in
> >[2].
> >
> 
> Hi, Doug,
> 
> I am gonna do this in the Mistral project, but I wonder how
> releasenotes/source/conf.py is generated? I found there is some
> content specific to Glance.
> 

I used the sphinx-quickstart program to generate that file. You could
copy one from an existing patch and change the parts that are not
relevant to your project.
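
For reference, the generated file mostly boils down to standard Sphinx
settings plus reno's Sphinx extension; a trimmed sketch (the project and
copyright strings are placeholders to adjust):

```python
# releasenotes/source/conf.py (trimmed sketch)
extensions = [
    'oslosphinx',
    'reno.sphinxext',
]

master_doc = 'index'
project = 'Mistral Release Notes'
copyright = '2015, Mistral developers'
html_theme = 'default'
```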

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-09 Thread Doug Hellmann
Excerpts from Andreas Jaeger's message of 2015-11-07 16:42:40 +0100:
> On 11/03/2015 08:46 PM, Doug Hellmann wrote:
> > [...]
> > We are ready to start rolling out Reno for use with Liberty stable
> > branch releases and in master for the Mitaka release. We need the
> > release liaisons to create and merge a few patches for each project
> > between now and the Mitaka-1 milestone.
> >
> > 1. We need one patch to the master branch of the project to add the
> > instructions for publishing the notes as part of the project
> > sphinx documentation build.  An example patch for Glance is in
> > [2].
> >
> > 2. We need another patch to the stable/liberty branch of the project
> > to set up Reno and introduce the first release note for that
> > series. An example patch for Glance is in [3].
>  > [...]
> 
> What shall we do with projects that do not have a stable/liberty branch? For 
> example, openstack-doc-tools does not have one, and we hope we don't need one?

There are 2 ways to handle projects without the stable branch. You
could simply write the notes in a single rst file and skip using
reno. The point of reno is to make backporting notes easier, but
if you aren't doing backports that's not necessary.

If you want to use reno, just set up a page that lists the releases
from the master branch.

> 
> Does the following look ok?
> https://review.openstack.org/242748

That's a good start. You could replace the contents of
releasenotes/source/index.rst with something like

  .. release-notes::
     :branch: origin/master

> 
> Also, what do you think of adding this to openstack-manuals? We do not 
> tag releases for the manuals but release continuously. Will the tools 
> work fine here?

reno depends on the tags to know how to organize the notes. If you're
not tagging versions of the manuals, you won't get much benefit from it.
We should talk about alternative ways to build a good changelog for the
manuals, if you want to try to automate some of that.

Doug

> 
> Andreas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-09 Thread Dmitry Tantsur

Hi OOO'ers, hopefully the subject caught your attention :)

Currently, tripleoclient exposes several commands in the "openstack 
baremetal" and "openstack baremetal introspection" namespaces, belonging 
to ironic and ironic-inspector respectively. The TL;DR of this email is to 
deprecate them and move to TripleO-specific namespaces. Read on to find out why.


Problem
===

I realized that we're doing the wrong thing when people started asking me 
why "baremetal introspection start" and "baremetal introspection bulk 
start" behave so differently (the former is from ironic-inspector, the 
latter is from tripleoclient). The problem with the TripleO commands is that 
they're highly opinionated workflow commands, but there's no way a user 
can distinguish them from general-purpose ironic/ironic-inspector 
commands. The way some of them work is not generic enough ("baremetal 
import"), or uses different defaults from the upstream project 
("configure boot"), or does something completely unacceptable upstream 
(e.g. the way "introspection bulk start" deals with node states).


So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

 This command assumes there's a "baremetal instackenv" object, while 
instackenv is a tripleo-specific file format.


2. baremetal import

 This command supports a limited subset of ironic drivers and driver 
properties, only those known to os-cloud-config.


3. baremetal introspection bulk start

 This command does several bad (IMO) things:
 a. Messes with ironic node states
 b. Operates implicitly on all nodes (in a wrong state)
 c. Defaults to polling

4. baremetal show capabilities

 This is the only command that is generic enough and could actually 
make it into ironicclient itself.


5. baremetal introspection bulk status

 See "bulk start" above.

6. baremetal configure ready state

 First of all, this and the next command use the "baremetal configure" 
prefix. I would not promise we'll never start using it in ironic, 
breaking the whole of TripleO.


 Second, it's actually Dell-specific.

7. baremetal configure boot

 This one is nearly ok, but it defaults to local boot, which is not an 
upstream default. Default values for images may not work outside of 
TripleO as well.


Proposal


As we already have "openstack undercloud" and "openstack overcloud" 
prefixes for TripleO, I suggest we move these commands under "openstack 
overcloud nodes" namespace. So we end up with:


 overcloud nodes import
 overcloud nodes configure ready state --drac
 overcloud nodes configure boot

As you see, I require an explicit --drac argument for "ready state" 
command. As to the remaining commands:


1. baremetal introspection status --all

  This is fine to move to inspector-client, as inspector knows which 
nodes are/were on introspection. We'll need a new API though.


2. baremetal show capabilities

  We'll have this or similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

  I believe that we need to make 2 things explicit in this replacement 
for "introspection bulk start": polling and operating on "available" nodes.


4. overcloud nodes import --dry-run

  could be a replacement for "baremetal instackenv validate".
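
Mechanically, most of this is an entry-point rename plus a cliff command
class; a hedged sketch of what "overcloud nodes import" might look like
(module, class and entry-point names are hypothetical):

```python
# tripleoclient/v1/overcloud_nodes.py (hypothetical module)
#
# setup.cfg would map the OSC command name to this class, e.g.:
#   [entry_points]
#   openstack.tripleoclient.v1 =
#       overcloud_nodes_import = tripleoclient.v1.overcloud_nodes:ImportNodes
from cliff import command


class ImportNodes(command.Command):
    """Import and register overcloud nodes from an instackenv file."""

    def get_parser(self, prog_name):
        parser = super(ImportNodes, self).get_parser(prog_name)
        parser.add_argument('--dry-run', action='store_true',
                            help='Validate the file, do not register nodes')
        parser.add_argument('instackenv', help='Path to the instackenv file')
        return parser

    def take_action(self, parsed_args):
        # Validation / node registration would go here.
        pass
```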


Please let me know what you think.

Cheers,
Dmitry.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-09 Thread Czesnowicz, Przemyslaw
Hi Samta,

This usually means that the vSwitch is not running or has crashed.
Can you check /opt/stack/logs/ovs-vswitchd.log? There should be an error 
message there.

Regards
Przemek

> -Original Message-
> From: Samta Rangare [mailto:samtarang...@gmail.com]
> Sent: Monday, November 9, 2015 1:51 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [networking-ovs-dpdk]
> 
> Hello Everyone,
> 
> I am installing devstack with networking-ovs-dpdk. The local.conf looks
> exactly like the one available in /opt/stack/networking-ovs-
> dpdk/doc/source/_downloads/local.conf.single_node,
> so I believe all the necessary configuration will be taken care of.
> 
> However, I am stuck at the place where devstack is trying to set an
> external-id ($ sudo ovs-vsctl br-set-external-id br-ex bridge-id br-ex).
> As soon as it hits this point it just hangs forever. I tried commenting
> this line out in lib/neutron_plugins/ml2 (I know this is wrong) and then
> all services came up except the ovs-dpdk agent and the ovs agent.
> 
> BTW I am deploying on Ubuntu 14.04. Any pointer will be really helpful.
> 
> Thanks,
> Samta
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] apt segfaults when too many repositories are configured

2015-11-09 Thread Simon Pasquier
FWIW, we tried to reproduce the bug on fresh environments and failed at it
(in other words, the deployment succeeds). We've also noticed that the
vmware-dvs plugin team has encountered the same bug [0]. If they can't
manage to reproduce the issue either, my guess would be that we faced a
transient problem with the remote package repositories.
BR,
Simon
[0] https://bugs.launchpad.net/fuel-plugins/+bug/1514043

On Fri, Nov 6, 2015 at 11:38 AM, Simon Pasquier 
wrote:

> Hello,
>
> While testing LMA with MOS 7.0, we got apt-get crashing and failing the
> deployment. The details are in the LP bug [0], the TL;DR version is that
> when more repositories are added (hence more packages), there is a risk
> that apt-get commands fail badly when trying to remap memory.
>
> The core issue should be fixed in apt or glibc but in the mean time,
> increasing the APT::Cache-Start value makes the issue go away. This is what
> we're going to do with the LMA plugin but since it's independent of LMA,
> maybe it needs to be addressed at the Fuel level?
>
> BR,
> Simon
>
> [0] https://bugs.launchpad.net/lma-toolchain/+bug/1513539
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-09 Thread Samta Rangare
Thanks for replying, Przemyslaw. There is no ovs-vswitchd.log in
/opt/stack/logs/; that is all it contains (ovsdb-server.pid, screen
logs).

When I cancel stack.sh (Ctrl-C) and rerun $ sudo ovs-vsctl
br-set-external-id br-ex bridge-id br-ex by hand,
it didn't hang; that means the vSwitch was running, wasn't it?

But rerunning stack.sh after unstack hangs again.

Thanks,
Samta

On Mon, Nov 9, 2015 at 7:50 PM, Czesnowicz, Przemyslaw
 wrote:
> Hi Samta,
>
> This usually means that the vSwitch is not running or has crashed.
> Can you check /opt/stack/logs/ovs-vswitchd.log? There should be an error 
> message there.
>
> Regards
> Przemek
>
>> -Original Message-
>> From: Samta Rangare [mailto:samtarang...@gmail.com]
>> Sent: Monday, November 9, 2015 1:51 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [networking-ovs-dpdk]
>>
>> Hello Everyone,
>>
>> I am installing devstack with networking-ovs-dpdk. The local.conf looks
>> exactly like the one available in /opt/stack/networking-ovs-
>> dpdk/doc/source/_downloads/local.conf.single_node,
>> so I believe all the necessary configuration will be taken care of.
>>
>> However, I am stuck at the place where devstack is trying to set an
>> external-id ($ sudo ovs-vsctl br-set-external-id br-ex bridge-id br-ex).
>> As soon as it hits this point it just hangs forever. I tried commenting
>> this line out in lib/neutron_plugins/ml2 (I know this is wrong) and then
>> all services came up except the ovs-dpdk agent and the ovs agent.
>>
>> BTW I am deploying on Ubuntu 14.04. Any pointer will be really helpful.
>>
>> Thanks,
>> Samta
>>
>> __
>> 
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] summarizing the cross-project summit session on Mitaka themes

2015-11-09 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-11-09 13:52:11 +0100:
> Doug Hellmann wrote:
> > One thing I forgot to mention in my original email was the discussion
> > about when we would have this themes conversation for the N cycle.
> > I had originally hoped we would discuss the themes online before
> > the summit, and that those would inform decisions about summit
> > sessions. Several other folks in the room made the point that we
> > were unlikely to come up with a theme so surprising that we would
> > add or drop a summit session from any existing planning, so having
> > the discussion in person at the summit to add background to the
> > other sessions for the week was more constructive.
> 
> So I was stuck in Design Summit 101 during this session and couldn't
> attend. Saying that discussing common themes before summit planning is
> unlikely to change design summit session contents strikes me as odd.
> That's equivalent to saying that the right themes are picked anyway.
> That may be the case for some projects, but certainly not the case for
> all projects, otherwise we wouldn't be having that discussion to begin
> with...
> 
> Personally I think we need to have the cycle themes discussion before
> the design summit so that it can really influence what gets discussed
> there and ends up in real action items. Adding background at the last
> minute to already-decided session topics is just not enough to trigger
> real progress in those key areas.
> 
> Why can't we have that discussion on the ML in the second half of the
> cycle, between the midcycle sprints and the summit planning?
> 

I think their point was that the themes we were coalescing on had
already been discussed in different forums, and since most of them
were "background" themes (stabilization, functional testing, etc.)
that didn't require much discussion to reach agreement, they would
be unlikely to trigger summit sessions. In the future, that might
not be the case, though.

I'll be raising the theme issue several times during milestone
retrospectives, so we may identify new themes out of those discussions.
As we get closer to the end of the cycle, some more explicit time spent
brainstorming themes might make sense, too.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-09 Thread Thierry Carrez
Sean Dague wrote:
> I do wonder what the cause of varying quality is in the distros. I do
> understand that some distros aren't licensing the test suite. But they
> are all building from the same upstream.

Except that they all use significant (and different) patchsets on top of
that "same upstream". Ubuntu for example currently carries 69 patches on
top of OpenJDK 7 source.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension access agent methods ?

2015-11-09 Thread Ihar Hrachyshka

Thanks Thomas, much appreciated.

I need to admit that we haven’t heard from the SFC folks just yet. I will try  
to raise awareness at today's team meeting that we are waiting for their feedback.  
Adding the [sfc] tag to the topic to get more attention.


Ihar

Thomas Morin  wrote:


Hi Ihar,

Ihar Hrachyshka :

Reviving the thread.
[...] (I'd appreciate it if someone checks me on the following, though):


This is an excellent recap.


 I set up a new etherpad to collect feedback from subprojects [2].


I've filled in details for networking-bgpvpn.
Please tell me if you need more information.

Once we collect use cases there and agree on an agent API for extensions  
(even if per agent type), we will implement it and define it as a stable API,  
then pass objects that implement the API into extensions through the  
extension manager. If extensions support multiple agent types, they can  
still distinguish which API to use based on the agent type string passed  
into the extension manager.


I really hope we start to collect use cases early so that we have time  
to polish the agent API and make it part of the l2 extensions early in  
the Mitaka cycle.


We'll be happy to validate the applicability of this approach as soon as  
something is ready.


Thanks for taking up this work!

-Thomas




Ihar Hrachyshka  wrote:


On 30 Sep 2015, at 12:53, Miguel Angel Ajo  wrote:



Ihar Hrachyshka wrote:

On 30 Sep 2015, at 12:08, thomas.mo...@orange.com wrote:

Hi Ihar,

Ihar Hrachyshka :

Miguel Angel Ajo :

Do you have a rough idea of what operations you may need to do?
Right now, what the bagpipe driver for networking-bgpvpn needs to  
interact with is:

- int_br OVSBridge (read-only)
- tun_br OVSBridge (add patch port, add flows)
- patch_int_ofport port number (read-only)
- local_vlan_map dict (read-only)
- setup_entry_for_arp_reply method (called to add static ARP  
entries)

Sounds very tightly coupled to the OVS agent.
Please bear in mind that the extension interface will be available
from different agent types (OVS, SR-IOV, [eventually LB]), so this
interface you're talking about could also serve as a translation
driver for the agents (where translation is possible). I totally
understand that most extensions are bound to a specific agent, and
we must be able to identify exactly which agent we're serving.
Yes, I do have this in mind, but what we've identified for now  
seems to be OVS specific.
Indeed it does. Maybe you can try to define the needed pieces as
high-level actions, not internal objects you need to access.
Like ‘connect endpoint X to Y’, ‘determine segmentation id for a
network’, etc.
I've been thinking about this, but would tend to reach the  
conclusion that the things we need to interact with are pretty hard  
to abstract out into something that would be generic across  
different agents.  Everything we need to do in our case relates to  
how the agents use bridges and represent networks internally:  
linuxbridge has one bridge per Network, while OVS has a limited  
number of bridges playing different roles for all networks with  
internal segmentation.


To look at the two things you mention:
- "connect endpoint X to Y": what we need to do is redirect the
traffic destined to the gateway of a Neutron network to the thing
that will do the MPLS forwarding for the right BGP VPN context
(called a VRF), in our case br-mpls (that could be done with an OVS
table too); that action might be abstracted out to hide the details
specific to OVS, but I'm not sure how to name the destination in a
way that would be agnostic to these details, and this is not really
relevant to do until we have a context in which the linuxbridge
agent would pass packets to something doing MPLS forwarding (OVS is
currently the only option we support for MPLS forwarding, and it
does not really make sense to mix linuxbridge for Neutron L2/L3 and
OVS for MPLS)
- "determine segmentation id for a network": this is something
really OVS-agent-specific; the linuxbridge agent uses multiple Linux
bridges and does not rely on internal segmentation


Completely abstracting out the packet forwarding pipelines in the OVS
and linuxbridge agents would possibly allow defining an interface that
agent extensions could use without knowing anything specific to OVS or
the linuxbridge agent, but I believe this is a very significant task
to tackle.


If you look for a clean way to integrate with reference agents, then  
it’s something that we should try to achieve. I agree it’s not an  
easy thing.


Just an idea: can we have a resource for traffic forwarding, similar  
to security groups? I know folks are not ok with extending security  
groups API due to compatibility reasons, so maybe fwaas is the place  
to experiment with it.


Hopefully it will be acceptable to create an interface, even if it  
exposes a set of methods specific to the linuxbridge agent and a set  
of methods

Re: [openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-09 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2015-11-03 14:46:04 -0500:
> As we discussed at the summit, the release management team is
> modifying our change management tracking tools and processes this
> cycle. This email is the official announcement of those changes,
> with more detail than we provided at the summit.
> 
> In past cycles, we have used a combination of Launchpad milestone
> pages and our wiki to track changes in releases. We used to pull
> together release notes for stable point releases at the time of
> release. Most of that work fell to the stable maintenance and release
> teams. Similarly, the release managers worked with PTLs and release
> liaisons at each milestone checkpoint to update Launchpad to
> accurately reflect the work completed at each stage of development.
> It's a lot of work to fix up Launchpad and assemble the notes and
> make sure they are accurate, which has caused us to be a bottleneck
> for clear and complete communication at the time of the release.
> We have been looking for ways to reduce that effort for these tasks
> and eliminate the bottleneck for some time.
> 
> This cycle, to address these problems for our ever-growing set of
> projects, the release management team is introducing a new tool for
> handling release notes as files in-tree, to allow us to simply and
> continuously build the release notes for stable branch point releases
> and milestones on the master branch. The idea is to use small YAML
> files, usually one per note or patch, to avoid merge conflicts on
> backports and then to compile those files in a deterministic way
> into a more readable document for readers. Files containing release
> notes can be included in patches directly, or you can create a
> separate patch with release notes if you want to document a feature
> that spans several patches.  The tool is called Reno, and it currently
> supports ReStructuredText and Sphinx for converting note input files
> to HTML for publication.  Reno is git branch-aware, so we can have
> separate release notes documents for each release series published
> together from the master build.
> 
> The documentation for Reno, including design principles and basic
> usage instructions, is available at [1]. For now we are focusing
> on Sphinx integration so that release notes are published online.
> We will add setuptools integration in a future version of Reno so
> that the release notes can be built with the source distribution.
> 
> As part of this rollout, I will also be updating the settings for
> the gerrit hook script so that when a patch with "Closes-Bug" in
> the commit message is merged the bug will be marked as "Fix Released"
> instead of "Fix Committeed" (since "Fix Committed" is not a closed
> state). When that work is done, I'll send another email to let PTLs
> know they can go through their existing bugs and change their status.
> 
> We are ready to start rolling out Reno for use with Liberty stable
> branch releases and in master for the Mitaka release. We need the
> release liaisons to create and merge a few patches for each project
> between now and the Mitaka-1 milestone.
> 
> 1. We need one patch to the master branch of the project to add the
>instructions for publishing the notes as part of the project
>sphinx documentation build.  An example patch for Glance is in
>[2].
> 
> 2. We need another patch to the stable/liberty branch of the project
>to set up Reno and introduce the first release note for that
>series. An example patch for Glance is in [3].
> 
> 3. Each project needs to turn on the relevant jobs in project-config.
>An example patch using Glance is in [4]. New patches will need
>to be based on the change that adds the necessary template [5],
>until that lands.
> 
> 4. Reno was not ready before the summit, so we started by using the
>wiki for release notes for the initial Liberty releases. We also
>need liaisons to convert those notes to reno YAML files in the
>stable/liberty branch of each project.
> 
> Please use the topic "add-reno" for all patches so we can track
> adoption.
> 
> Once those merge, project teams can stop using Launchpad for tracking
> completed work. We will still use Launchpad for bug reports, for
> now. If a team wants to continue using it for tracking blueprints,
> that's fine.  If a team wants to use Launchpad for scheduling work
> to be done in the future, but not for release tracking, that is
> also fine. The release management team will no longer be reviewing
> or updating Launchpad as part of the release process.
> 
> Thanks,
> Doug
> 
> [1] http://docs.openstack.org/developer/reno/
> [2] https://review.openstack.org/241323
> [3] https://review.openstack.org/241322
> [4] https://review.openstack.org/241344
> [5] https://review.openstack.org/241343
> 

We've had a couple of projects ask about what to do for deliverables
spread across more than one repository. For now, it's only necessary to
add reno to one 

Re: [openstack-dev] [Ceilometer] [Bug#1497073]The return sample body of sample-list is different when use -m and not

2015-11-09 Thread gord chung
again, it's been debated that we shouldn't be allowed to completely drop 
APIs.


in regards to deprecation, how do you make this deprecation known? also, 
this goes beyond the client: if we deprecate in the client, we should 
really have the endpoint deprecated in the API as well.


On 05/11/2015 8:25 PM, liusheng wrote:
I don't think we need two APIs performing duplicated functionality; the 
"sample-list -m" command actually invokes the API "GET 
/v2/meters/<meter_name>", so it is more of a meter-related API, not a 
sample one. I personally prefer to mark the "sample-list -m" command 
deprecated and drop it in a future cycle. Is this reasonable?


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-09 Thread Doug Hellmann
Excerpts from Andreas Jaeger's message of 2015-11-09 10:11:33 +0100:
> On 11/03/2015 08:46 PM, Doug Hellmann wrote:
> > As we discussed at the summit, the release management team is
> > modifying our change management tracking tools and processes this
> > cycle. This email is the official announcement of those changes,
> > with more detail than we provided at the summit.
> >
> > In past cycles, we have used a combination of Launchpad milestone
> > pages and our wiki to track changes in releases. We used to pull
> > together release notes for stable point releases at the time of
> > release. Most of that work fell to the stable maintenance and release
> > teams. Similarly, the release managers worked with PTLs and release
> > liaisons at each milestone checkpoint to update Launchpad to
> > accurately reflect the work completed at each stage of development.
> > It's a lot of work to fix up Launchpad and assemble the notes and
> > make sure they are accurate, which has caused us to be a bottleneck
> > for clear and complete communication at the time of the release.
> > We have been looking for ways to reduce that effort for these tasks
> > and eliminate the bottleneck for some time.
> >
> > This cycle, to address these problems for our ever-growing set of
> > projects, the release management team is introducing a new tool for
> > handling release notes as files in-tree, to allow us to simply and
> > continuously build the release notes for stable branch point releases
> > and milestones on the master branch. The idea is to use small YAML
> > files, usually one per note or patch, to avoid merge conflicts on
> > backports and then to compile those files in a deterministic way
> > into a more readable document for readers. Files containing release
> > notes can be included in patches directly, or you can create a
> > separate patch with release notes if you want to document a feature
> > that spans several patches.  The tool is called Reno, and it currently
> > supports ReStructuredText and Sphinx for converting note input files
> > to HTML for publication.  Reno is git branch-aware, so we can have
> > separate release notes documents for each release series published
> > together from the master build.
> >
> > The documentation for Reno, including design principles and basic
> > usage instructions, is available at [1]. For now we are focusing
> > on Sphinx integration so that release notes are published online.
> > We will add setuptools integration in a future version of Reno so
> > that the release notes can be built with the source distribution.
> >
> > As part of this rollout, I will also be updating the settings for
> > the gerrit hook script so that when a patch with "Closes-Bug" in
> > the commit message is merged the bug will be marked as "Fix Released"
> > instead of "Fix Committeed" (since "Fix Committed" is not a closed
> > state). When that work is done, I'll send another email to let PTLs
> > know they can go through their existing bugs and change their status.
> >
> > We are ready to start rolling out Reno for use with Liberty stable
> > branch releases and in master for the Mitaka release. We need the
> > release liaisons to create and merge a few patches for each project
> > between now and the Mitaka-1 milestone.
> >
> > 1. We need one patch to the master branch of the project to add the
> > instructions for publishing the notes as part of the project
> > sphinx documentation build.  An example patch for Glance is in
> > [2].
> >
> > 2. We need another patch to the stable/liberty branch of the project
> > to set up Reno and introduce the first release note for that
> > series. An example patch for Glance is in [3].
> >
> > 3. Each project needs to turn on the relevant jobs in project-config.
> > An example patch using Glance is in [4]. New patches will need
> > to be based on the change that adds the necessary template [5],
> > until that lands.
> 
> Currently the job runs on *all* branches. I've proposed 
> https://review.openstack.org/242975 to run it only on Liberty and 
> master. Rereading your comments above, it might be that we only need to 
> run it on master, is that correct?

Yes, thanks for spotting this, we only need it on master.

> 
> Note that currently glance is failing since the job runs on Liberty but 
> your change is not backported: https://review.openstack.org/238358
> 
> Andreas
> 
> > 4. Reno was not ready before the summit, so we started by using the
> > wiki for release notes for the initial Liberty releases. We also
> > need liaisons to convert those notes to reno YAML files in the
> > stable/liberty branch of each project.
> >
> > Please use the topic "add-reno" for all patches so we can track
> > adoption.
> >
> > Once those merge, project teams can stop using Launchpad for tracking
> > completed work. We will still use Launchpad for bug reports, for
> > now. If a team wants to continue using it for tracking blueprints,
> 

Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-09 Thread Mooney, Sean K
Hi
Can you provide some more information regarding your deployment?

Can you check which kernel you are using?

uname -a

If you are using a 3.19 kernel: changes to some locking code in the kernel 
broke synchronization in DPDK 2.0 and require DPDK 2.1 to be used instead.
In general it is not advisable to use a 3.19 kernel with DPDK, as it can lead 
to non-deterministic behavior.

When devstack hangs, can you connect with a second ssh session and run 
sudo service ovs-dpdk status
and 
ps aux | grep ovs


When the deployment hangs at sudo ovs-vsctl br-set-external-id br-ex 
bridge-id br-ex,
it usually means that the ovs-vswitchd process has exited.

This can happen for a number of reasons.
The vswitchd process may exit if it failed to allocate memory (due to memory 
fragmentation or a lack of free hugepages).
If ovs-vswitchd.log is not available, can you check that the hugepage mount 
point was created at /mnt/huge and that it is mounted?
Run 
ls -al /mnt/huge 
and 
mount

Then check how many hugepages are allocated (the /proc/meminfo entries are 
capitalized, hence -i):

cat /proc/meminfo | grep -i huge


The vswitchd process may also exit if it failed to initialize DPDK interfaces.
This can happen if no interface is compatible with the igb_uio or vfio-pci 
drivers.
(Note: in the vfio-pci case, all interfaces in the same IOMMU group must be 
bound to the vfio-pci driver, and the IOMMU must be enabled on the kernel 
command line, with VT-d enabled in the BIOS.)

Can you check which interfaces are bound to the DPDK driver by running the 
following command:

/opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py --status


Finally, can you confirm that ovs-dpdk compiled successfully by either 
checking the xstack.log or 
checking for the BUILD_COMPLETE file in /opt/stack/ovs?

Regards
sean




-Original Message-
From: Samta Rangare [mailto:samtarang...@gmail.com] 
Sent: Monday, November 9, 2015 2:31 PM
To: Czesnowicz, Przemyslaw
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Thanks for replying, Przemyslaw. There is no ovs-vswitchd.log in 
/opt/stack/logs/; that is all it contains (ovsdb-server.pid, screen logs).

When I cancel stack.sh (Ctrl-C) and rerun $ sudo ovs-vsctl 
br-set-external-id br-ex bridge-id br-ex by hand, it didn't hang; that 
means the vSwitch was running, wasn't it?

But rerunning stack.sh after unstack hangs again.

Thanks,
Samta

On Mon, Nov 9, 2015 at 7:50 PM, Czesnowicz, Przemyslaw 
 wrote:
> Hi Samta,
>
> This usually means that the vSwitch is not running or has crashed.
> Can you check /opt/stack/logs/ovs-vswitchd.log? There should be an error 
> message there.
>
> Regards
> Przemek
>
>> -Original Message-
>> From: Samta Rangare [mailto:samtarang...@gmail.com]
>> Sent: Monday, November 9, 2015 1:51 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [networking-ovs-dpdk]
>>
>> Hello Everyone,
>>
>> I am installing devstack with networking-ovs-dpdk. The local.conf 
>> looks exactly like the one available in /opt/stack/networking-ovs- 
>> dpdk/doc/source/_downloads/local.conf.single_node,
>> so I believe all the necessary configuration will be taken care of.
>>
>> However, I am stuck at the place where devstack is trying to set an 
>> external-id ($ sudo ovs-vsctl br-set-external-id br-ex bridge-id 
>> br-ex). As soon as it hits this point it just hangs forever. I 
>> tried commenting this line out in lib/neutron_plugins/ml2 (I know this 
>> is wrong) and then all services came up except the ovs-dpdk agent and 
>> the ovs agent.
>>
>> BTW I am deploying on Ubuntu 14.04. Any pointer will be really helpful.
>>
>> Thanks,
>> Samta
>>
>> __
>> 
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api]

2015-11-09 Thread Everett Toews
On Nov 9, 2015, at 12:29 AM, Tony Breeds  wrote:
> 
> On Fri, Nov 06, 2015 at 12:30:19PM +, John Garbutt wrote:
> 
>> Ideally, I would like us to fill out that pagination part first.
> 
> It seems the person leading this within the API-WG is AWOL so ...

A couple of patch sets were just sent. Please review!

Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-09 Thread Jay Pipes

Hi Zheng, comments inline...

On 11/09/2015 02:04 AM, Zhenyu Zheng wrote:

Hi All,

Currently, we have strong demand for "rebuilding" (or actions like
rebuilding) volume-backed instances. In production deployments, volume-backed
instances are widely used. Users demand the ability to perform the
rebuild (recovery) action


Rebuild definitely does not equal recovery in Nova. It is *not* a 
data-safe operation and was never intended as such. More below...


> for the root device while maintaining the instance UUID

and that sort of information; many users also want to keep the volume
UUID unchanged.


I can kind of see the argument for maintaining volume UUID, instance 
UUID and IP addresses for purposes of not breaking orchestration tooling 
or scripts that refer to the UUID. It's not cloudy, but for "pet" VMs, 
it's a pretty common request.



The Nova side doesn't support using the rebuild API directly for volume-backed
instances (the volume will not change).


The rebuild API "starts over from scratch" and therefore doesn't make 
sense for volume-backed instances (because there is no starting over 
from scratch since all data on the root volume is saved to the volume).


> And the Nova side also doesn't

support detaching the root device, which means we cannot perform volume
backup/restore from the Cinder side, because those actions need the
volume to be in "available" status.


Couldn't you just snapshot the instance? That is the recommended way to 
perform a "backup" operation.


Changing the operating system (mentioned in your followup post response 
to Clint) isn't a "restore" operation that I've ever heard of. Are you 
actually referring to a *rescue* API operation here?


http://developer.openstack.org/api-ref-compute-v2.1.html#rescue

Best,
-jay


Now there are a couple of patches proposed in Nova trying to fix this problem:
[1] https://review.openstack.org/#/c/201458/
[2] https://review.openstack.org/#/c/221732/
[3] https://review.openstack.org/#/c/223887/

[1] and [2] are trying to expose an API for detaching root devices; [3]
is trying to fix it in the current rebuild API. But none of them has
gotten much attention.

As we now have strong demand for a "rebuild" action on volume-backed
instances, and yet there is no clear information about it, I wonder:
are there any plans for how to support it in Nova and Cinder?

Yours,

Zheng


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] package dependency installs after recent clean-ups

2015-11-09 Thread Sean Dague
Thanks for getting to the bottom of this, the first patch is approved.
Will review the rest today.

On 11/09/2015 01:59 AM, Ian Wienand wrote:
> devstack maintainers, et al.
> 
> If your CI is failing with missing packages (xml bindings failing to
> build, postgres bindings, etc), it may be due to some of the issues
> covered below.
> 
> I believe some of the recent changes around letting pip build wheels
> and cleaning up some of the package dependencies have revealed that
> devstack is not quite installing build pre-reqs as we thought it was.
> 
> [1] fixes things so we actually install the packages listed in
> "general"
> 
> [2] is a further cleanup of the "devlib" packages, which are no longer
> installed since we removed tools/build_wheels.sh
> 
> I believe a combination of what was removed in [3] and [2] was hiding
> the missing installs from [1].  Thus we can clean up some of the
> dependencies via [4].
> 
> Stacked on that are some less important further clean-ups
> 
> Reviews appreciated, because it seems to have broken some CI, see [5]
> 
> -i
> 
> [1] https://review.openstack.org/242891
> [2] https://review.openstack.org/242894
> [3] https://review.openstack.org/242895
> [4] https://review.openstack.org/242895
> [5] https://review.openstack.org/242536


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-09 Thread Dougal Matthews
On 9 November 2015 at 12:44, Dmitry Tantsur  wrote:

> Hi OOO'ers, hopefully the subject caught your attentions :)
>
> Currently, tripleoclient exposes several commands in the "openstack baremetal"
> and "openstack baremetal introspection" namespaces, belonging to ironic and
> ironic-inspector respectively. The TL;DR of this email is to deprecate them and
> move to TripleO-specific namespaces. Read on to find out why.
>
> Problem
> ===
>
> I realized that we're doing the wrong thing when people started asking me
> why "baremetal introspection start" and "baremetal introspection bulk
> start" behave so differently (the former is from ironic-inspector, the
> latter is from tripleoclient). The problem with the TripleO commands is that
> they're highly opinionated workflow commands, but there's no way a user
> can distinguish them from general-purpose ironic/ironic-inspector commands.
> The way some of them work is not generic enough ("baremetal import"), or
> uses different defaults from the upstream project ("configure boot"), or
> does something completely unacceptable upstream (e.g. the way
> "introspection bulk start" deals with node states).
>

A big +1 to the idea.

We originally did this because we wanted to make it feel more
"integrated", but it never quite worked. I completely agree with all the
justifications below.


So, here are commands that tripleoclient exposes with my comments:
>
> 1. baremetal instackenv validate
>
>  This command assumes there's a "baremetal instackenv" object, while
> instackenv is a tripleo-specific file format.
>
> 2. baremetal import
>
>  This command supports a limited subset of ironic drivers and driver
> properties, only those known to os-cloud-config.
>
> 3. baremetal introspection bulk start
>
>  This command does several bad (IMO) things:
>  a. Messes with ironic node states
>  b. Operates implicitly on all nodes (in a wrong state)
>  c. Defaults to polling
>
> 4. baremetal show capabilities
>
>  This is the only command that is generic enough and could actually make
> it into ironicclient itself.
>
> 5. baremetal introspection bulk status
>
>  See "bulk start" above.
>
> 6. baremetal configure ready state
>
>  First of all, this and the next command use the "baremetal configure" prefix.
> I would not promise we'll never start using it in ironic, breaking the
> whole of TripleO.
>
>  Second, it's actually Dell-specific.
>

heh, that I didn't know!


>
> 7. baremetal configure boot
>
>  This one is nearly ok, but it defaults to local boot, which is not an
> upstream default. Default values for images may not work outside of TripleO
> as well.
>
> Proposal
> 
>
> As we already have "openstack undercloud" and "openstack overcloud"
> prefixes for TripleO, I suggest we move these commands under "openstack
> overcloud nodes" namespace. So we end up with:
>
>  overcloud nodes import
>  overcloud nodes configure ready state --drac
>  overcloud nodes configure boot
>

I think this is probably okay, but I wonder if "nodes" is a bit generic?
Why not "overcloud baremetal" for consistency?



> As you see, I require an explicit --drac argument for "ready state"
> command. As to the remaining commands:
>
> 1. baremetal introspection status --all
>
>   This is fine to move to inspector-client, as inspector knows which nodes
> are/were on introspection. We'll need a new API though.
>

A new API endpoint in Ironic Inspector?


2. baremetal show capabilities
>
>   We'll have this or similar command in ironic, hopefully this cycle.
>
> 3. overcloud nodes introspect --poll --allow-available
>
>   I believe that we need to make 2 things explicit in this replacement for
> "introspection bulk start": polling and operating on "available" nodes.
>
> 4. overcloud nodes import --dry-run
>
>   could be a replacement for "baremetal instackenv validate".
>
>
> Please let me know what you think.
>

Thanks for bringing this up, it should make everything much clearer for
everyone.



>
> Cheers,
> Dmitry.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting - 11/09/2015

2015-11-09 Thread Renat Akhmerov
Hi,

This is a reminder that we’ll have a team meeting today in #openstack-meeting
at 16:00 UTC.

Agenda:
Review action items
Current status (progress, issues, roadblocks, further plans)
Scoping Mitaka and Mitaka-1
Open discussion

Feel free to add your topics at 
https://wiki.openstack.org/wiki/Meetings/MistralAgenda#Agenda 
.

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2015-11-09 Thread Alex Xu
Hi,

We have our weekly Nova API meeting this week. The meeting is being held
Tuesday at 1200 UTC.

In other timezones the meeting is at:

EST 08:00 (Tue)
Japan 21:00 (Tue)
China 20:00 (Tue)
United Kingdom 13:00 (Tue)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nominations open for the N and O names of OpenStack

2015-11-09 Thread Monty Taylor

Hey everybody!

It's release naming time, and this time we get to do two at once!

If you'd like to propose a name, there are two wiki pages:

For the N release, where the geographic region is "Texas Hill Country":

https://wiki.openstack.org/wiki/Release_Naming/N_Proposals

For the O release, where the geographic region is "Catalonia":

https://wiki.openstack.org/wiki/Release_Naming/O_Proposals

We're going to keep proposals open until 2015-11-15 23:59:59 UTC, and 
voting will start 2015-11-30. The time in between proposals closing and 
voting starting is to allow for a TC meeting to approve any exceptional 
names that people propose, and then to not attempt to have a vote 
spanning the US Thanksgiving holiday.


Have fun!

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-09 Thread Sean Dague
On 11/09/2015 06:05 AM, Thierry Carrez wrote:

> So that is an important point. While there is "the Oracle JVM", there is
> nothing like "OpenJDK". There are a number of OpenJDK builds by various
> distros and they are all different (and of varying quality). The beast
> is brittle, as anyone who has ever run the TCK on OpenJDK should be able
> to tell you. The reason a lot of Java bugtrackers still start by asking
> you to "reproduce on Oracle's JVM" is to eliminate that unknown, not
> because OpenJDK is always bad.
> 
> My main objection about picking a Java solution was that we'd in effect
> force our users into a non-free solution so that they eliminate that
> unknown themselves. I guess as long as we are reasonably confident that
> ZooKeeper behaves well with most OpenJDK implementations, and that there
> are solid, well-known free software deployment options available, we
> should be fine ?

I think that's where declaring this fact early is good. Hey distros,
this is going to need to work out of the box. That seems pretty
reasonable, and a heads-up now helps ensure things will function well
down the road.

I do wonder what causes the varying quality across the distros. I
understand that some distros aren't licensing the test suite, but they
are all building from the same upstream.

I want to be really specific before we as a community spread FUD around
an effort like OpenJDK, because it doesn't help us make long-term
decisions if we're basing them on long-standing biases that may or may
not still be supported by data.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-09 Thread Sean Dague
On 11/09/2015 05:30 AM, Hugh Blemings wrote:
> Hiya,
> 
> On 7/11/2015 06:42, Sean Dague wrote:
>> On 11/06/2015 01:15 AM, Tony Breeds wrote:
>>> Hello all,
>>>
>>> I'll start by acknowledging that this is a big and complex issue and I
>>> do not claim to be across all the view points, nor do I claim to be
>>> particularly persuasive ;P
>>>
>>> Having stated that, I'd like to seek constructive feedback on the
>>> idea of
>>> keeping Juno around for a little longer.  During the summit I spoke to a
>>> number of operators, vendors and developers on this topic.  There was
>>> some
>>> support and some "That's crazy pants!" responses.  I clearly didn't
>>> make it
>>> around to everyone, hence this email.
>>>
>>> Acknowledging my affiliation/bias:  I work for Rackspace in the private
>>> cloud team.  We support a number of customers currently running Juno
>>> that are,
>>> for a variety of reasons, challenged by the Kilo upgrade.
>>
>> The upstream strategy has been make upgrades unexciting, and then folks
>> can move forward easily.
>>
>> I would really like to unpack what those various reasons are that people
>> are trapped. Because figuring out why they feel that way is important
>> data in what needs to be done better on upgrade support and testing.
> 
> In reading this thread and Sean's post, I wonder out loud if we're
> seeing something somewhat new to OpenStack here, but perhaps not to
> other FOSS projects.
> 
> Specifically does Kilo happen to mark a point where a much larger number
> of end users have adopted OpenStack and so we're starting to see a much
> greater number of visible and mainstream users facing the "upgrade
> difficulty question" ?
> 
> If Juno is the point where we suddenly got an order of magnitude more
> deployments, then some point later you'll see an order of magnitude more
> end users struggling with how/when to upgrade.
> 
> Really wish I could articulate this better, but perhaps the point can be
> distilled from the ramble...

Honestly, I think that every release has seen a larger number of new
installations than the release before it. The stable EOL thread is
nothing new. The promise that people will show up is nothing new. The
lack of anyone else showing up to help maintain stable is nothing new.

I do think we need to focus on the snags though. Very few upstreams
maintain LTS releases. A big piece of that is it makes upgrades harder.
It means a ton of changes are being inflicted on you all at once.
Especially if you want to get to the point of live upgrading an
installation using live migration to create 0 downtime environments.
Which means that you've got to be able to live upgrade between the
versions of libvirt / ovs across that time frame.

So let's figure out where the snags are. I'm pretty uninterested in
threads that just scream LTS without a list of upgrade bugs that have
been filed to describe why rapid upgrade isn't the right long term
solution.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] RFC: profile matching

2015-11-09 Thread Dmitry Tantsur

Hi folks!

I spent some time thinking about bringing profile matching back in, so 
I'd like to get your comments on the following near-future plan.


First, the scope of the problem. What we do is essentially a kind of 
capability discovery. We'll help the nova scheduler do the right 
thing by assigning a capability like "suits for compute", "suits for 
controller", etc. The most obvious path is to use inspector to assign 
capabilities like "profile=1" and then filter nodes by them.


Special care, however, is needed when some of the nodes match 2 or 
more profiles. E.g. if we have all 4 nodes matching "compute" and then 
only 1 matching "controller", nova can select that one node for the 
"compute" flavor, and then complain that it does not have enough hosts 
for "controller".


We also want to conduct a sanity check before even calling 
heat/nova to avoid cryptic "no valid host found" errors.


(1) Inspector part

During the liberty cycle we landed a whole bunch of APIs in 
inspector that allow us to define rules on introspection data. The plan 
is to have rules saying, for example:


 rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
 rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based 
DSL [1].
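
For concreteness, here is a minimal sketch of what rule 1 could look
like when pushed through the inspector rules API, in Python with the
requests library. The /v1/rules path, the "ge" condition op and the
"set-capability" action are my reading of [1]; treat the exact schema
as an assumption rather than a reference:

    import json
    import requests

    # Rule 1 from above: nodes with >= 8 GiB of RAM get compute_profile=1.
    rule = {
        "description": "nodes with at least 8 GiB RAM suit the compute profile",
        "conditions": [
            {"op": "ge", "field": "memory_mb", "value": 8192},
        ],
        "actions": [
            {"action": "set-capability", "name": "compute_profile", "value": "1"},
        ],
    }

    resp = requests.post(
        "http://127.0.0.1:5050/v1/rules",  # assumed inspector endpoint/port
        data=json.dumps(rule),
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": "ADMIN_TOKEN"},  # placeholder token
    )
    resp.raise_for_status()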


As you see, one node can receive 0, 1 or many such capabilities. So we 
need the next step to make a final decision, based on how many nodes we 
need of every profile.


(2) Modifications of `overcloud deploy` command: assigning profiles

A new argument, --assign-profiles, will be added. If it's provided, 
tripleoclient will fetch all ironic nodes and try to ensure that we 
have enough nodes with each profile.


Nodes with an existing "profile:xxx" capability are left as they are. For 
nodes without a profile it will look at the "xxx_profile" capabilities 
discovered in the previous step. One of the possible profiles will be 
chosen and assigned to the "profile" capability. The assignment stops as 
soon as we have enough nodes for a flavor, as requested by the user.
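
To pin down the intended logic, here's a rough pure-Python sketch of
that assignment pass. The node dicts and the helper name are invented
stand-ins for real ironicclient objects, not actual tripleoclient code:

    # "nodes" stands in for ironic nodes carrying inspector-discovered
    # capabilities; "wanted" is how many nodes the user requested per
    # profile. A real version would prefer nodes that match only one
    # profile, to avoid starving the scarcer profiles.
    def assign_profiles(nodes, wanted):
        assigned = dict((p, 0) for p in wanted)
        # Count nodes that already carry an explicit profile.
        for node in nodes:
            profile = node["capabilities"].get("profile")
            if profile in assigned:
                assigned[profile] += 1
        for node in nodes:
            if node["capabilities"].get("profile"):
                continue  # explicit profiles are left untouched
            for profile in wanted:
                discovered = node["capabilities"].get("%s_profile" % profile)
                if discovered == "1" and assigned[profile] < wanted[profile]:
                    node["capabilities"]["profile"] = profile
                    assigned[profile] += 1
                    break
        return assigned

The return value makes it easy for the subsequent validation step to
complain about any profile that is still short of nodes.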


(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will 
fetch all the flavors involved and look at their "profile" capabilities. If 
they are set for any flavors, it will check that we have enough ironic 
nodes with the given "profile:xxx" capability. This check will happen 
after profile assignment, if --assign-profiles is used.
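
And a matching sketch of the sanity check itself, again with assumed
data shapes (a list of (flavor, count) pairs and the same node dicts as
above; the "capabilities:profile" extra spec key is an assumption):

    # Compare how many nodes each flavor requests against how many
    # ironic nodes ended up with the matching "profile" capability.
    def validate_profiles(flavors, nodes):
        errors = []
        for flavor, count in flavors:
            profile = flavor["extra_specs"].get("capabilities:profile")
            if not profile:
                continue  # this flavor doesn't constrain the profile
            have = sum(1 for n in nodes
                       if n["capabilities"].get("profile") == profile)
            if have < count:
                errors.append("flavor %s needs %d '%s' nodes, found %d"
                              % (flavor["name"], count, profile, have))
        return errors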


Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-09 Thread Jay Pipes

On 11/03/2015 12:19 PM, Ihar Hrachyshka wrote:

Now, I don't think that we need two APIs for the same thing. I would
be glad if we instead converge on a single API, making sure all cases
are covered. In the end, the feature is just a building block for
other features, like fwaas, security groups, or QoS.

We could build traffic classifier use cases on top of service groups,
though the name of the latter is a bit specific, and could use some
more generalization to cover other cases where we need to classify
traffic that may belong to different services; or vice versa, may
split into several categories, even while having a single service source.

I encourage those who work on traffic classifier, and those who will
implement and review service group feature, to start discussion on how
we converge and avoid multiple APIs for similar things.


+1000

Please do not create 3 APIs that essentially do the exact same thing. I 
think Sean's on the right track with the modeling he's been doing in his 
PoC library.


I think the classifier spec from Yuji Azama is better than the 
networking-sfc service function chaining API proposal because, well, it 
actually describes an API instead of a series of CLI commands.


The problem with the classifier proposal, though, is that it flattens 
the API into just the rules part, discarding the grouping part that 
allows one to define groups of rules. The service-group[-rules] spec 
gets this part correct.


At this point, I'd recommend just enhancing the existing security-group 
and security-group-rules APIs, though, since having both "service-group" 
and "security-group" in the API referring to similar concepts will be 
quite confusing IMHO. But then, Sean has told me he doesn't want to 
change the original AWS security group concept in the Neutron API... I'm 
on the fence about whether that would really be problematic if the 
security-group[-rules] API is enhanced/evolved appropriately.


In short, my preference, in order, would be:

1) Enhance/evolve the existing security-groups and security-group-rules 
API in Neutron to support more generic classification of traffic from L2 
to L7, using mostly the modeling that Sean has put together in his PoC 
library.


2) Keep the security-group API as-is to keep outward compatibility with 
AWS. Create a single, new service-groups and service-group-rules API for 
L2 to L7 traffic classification using mostly the modeling that Sean has 
put together. Remove the networking-sfc repo and obsolete the classifier 
spec. Not sure what should/would happen to the FWaaS API, frankly.
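
Purely to illustrate the granularity being argued about (this is an
invented example, not any project's approved API), a single rule in such
a service-group-rules resource might carry both the classic L3/L4 fields
and the L7 match that AWS-style security groups lack:

    # Invented field names, for illustration only; no spec defines these.
    service_group_rule = {
        "service_group_id": "SG_UUID",       # placeholder parent group
        "direction": "ingress",
        "ethertype": "IPv4",
        "protocol": "tcp",
        "destination_port_range": "8080:8089",
        "l7_match": {"http_method": "GET"},  # the L2-L7 generalization
    }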


Best,
-jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] image.exists events

2015-11-09 Thread Cristian Tomoiaga
Hello,

I recently created a script to generate image.exists events similar to what
is described here:
https://blueprints.launchpad.net/glance/+spec/glance-exists-notification

I am wondering if it's a good idea to finish the implementation described
in that spec?

image.exists events are useful especially for resource accounting/billing
(what images were active for a tenant 7 months ago for example).

(see nova instance.exists events or cinder, neutron and probably other
projects)
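
For reference, the script boils down to something like the sketch below.
The oslo.messaging calls are the standard notifier API; the payload keys
are my guesses modeled on nova's instance.exists, and fetching the list
of active images (e.g. via glanceclient) is left out:

    import datetime

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(
        transport, driver='messaging', topic='notifications',
        publisher_id='image.localhost')

    def emit_exists(images):
        # Emit one image.exists event per active image; the real field
        # set would be defined by the Glance spec.
        now = datetime.datetime.utcnow().isoformat()
        for image in images:
            payload = {
                'id': image['id'],
                'owner': image['owner'],
                'size': image['size'],
                'status': image['status'],
                'audit_period_ending': now,  # assumed, mirroring nova
            }
            notifier.info({}, 'image.exists', payload)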

-- 
Cristian Tomoiaga
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova API sub-team meeting

2015-11-09 Thread Alex Xu
2015-11-09 21:49 GMT+08:00 Ed Leafe :

> On Nov 9, 2015, at 7:11 AM, Alex Xu  wrote:
> >
> > We have our weekly Nova API meeting this week. The meeting is being held
> > Tuesday at 1200 UTC.
> >
> > In other timezones the meeting is at:
> >
> > EST 08:00 (Tue)
>
> Just to clarify: that's EDT 07:00, since daylight savings ended in the US
> last week.
>

Thanks! Sorry, I always forget the daylight saving change.


>
> > Japan 21:00 (Tue)
> > China 20:00 (Tue)
> > United Kingdom 13:00 (Tue)
>
>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Let's stop hijacking other projects' OSC namespaces

2015-11-09 Thread Dmitry Tantsur

On 11/09/2015 03:04 PM, Dougal Matthews wrote:

On 9 November 2015 at 12:44, Dmitry Tantsur wrote:

Hi OOO'ers, hopefully the subject caught your attention :)

Currently, tripleoclient exposes several commands in "openstack
baremetal" and "openstack baremetal introspection" namespaces
belonging to ironic and ironic-inspector, respectively. The TL;DR of this
email is to deprecate them and move to TripleO-specific namespaces.
Read on to know why.

Problem
===

I realized that we're doing the wrong thing when people started asking
me why "baremetal introspection start" and "baremetal introspection
bulk start" behave so differently (the former is from
ironic-inspector, the latter is from tripleoclient). The problem
with TripleO commands is that they're highly opinionated workflows
commands, but there's no way a user can distinguish them from
general-purpose ironic/ironic-inspector commands. The way some of
them work is not generic enough ("baremetal import"), or uses
different defaults from an upstream project ("configure boot"), or
does something completely unacceptable upstream (e.g. the way
"introspection bulk start" deals with node states).


A big +1 to the idea.

We originally did this because we wanted to make it feel more
"integrated", but it never quite worked. I completely agree with all the
justifications below.


So, here are commands that tripleoclient exposes with my comments:

1. baremetal instackenv validate

  This command assumes there's a "baremetal instackenv" object,
while instackenv is a tripleo-specific file format.

2. baremetal import

  This command supports a limited subset of ironic drivers and
driver properties, only those known to os-cloud-config.

3. baremetal introspection bulk start

  This command does several bad (IMO) things:
  a. Messes with ironic node states
  b. Operates implicitly on all nodes (in a wrong state)
  c. Defaults to polling

4. baremetal show capabilities

  This is the only command that is generic enough and could
actually make it to ironicclient itself.

5. baremetal introspection bulk status

  See "bulk start" above.

6. baremetal configure ready state

  First of all, this and the next command use "baremetal configure"
prefix. I can't promise we'll never start using it in ironic, which
would break the whole TripleO.

  Second, it's actually Dell-specific.


heh, that I didn't know!


7. baremetal configure boot

  This one is nearly ok, but it defaults to local boot, which is not
an upstream default. Default values for images may not work outside
of TripleO as well.

Proposal


As we already have "openstack undercloud" and "openstack overcloud"
prefixes for TripleO, I suggest we move these commands under
"openstack overcloud nodes" namespace. So we end up with:

  overcloud nodes import
  overcloud nodes configure ready state --drac
  overcloud nodes configure boot


I think this is probably okay, but I wonder if "nodes" is a bit generic?
Why not "overcloud baremetal" for consistency?


I don't have a strong opinion on it :)




As you see, I require an explicit --drac argument for "ready state"
command. As to the remaining commands:

1. baremetal introspection status --all

   This is fine to move to inspector-client, as inspector knows
which nodes are/were on introspection. We'll need a new API though.


A new API endpoint in Ironic Inspector?


Yeah, a new endpoint to report all nodes that are/were on introspection.




2. baremetal show capabilities

   We'll have this or similar command in ironic, hopefully this cycle.

3. overcloud nodes introspect --poll --allow-available

   I believe that we need to make 2 things explicit in this
replacement for "introspection bulk start": polling and operating
on "available" nodes.

4. overcloud nodes import --dry-run

   could be a replacement for "baremetal instackenv validate".


Please let me know what you think.


Thanks for bringing this up; it should make everything much clearer for
everyone.


Great! I've also added this topic to tomorrow's meeting to increase 
visibility.





Cheers,
Dmitry.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-09 Thread Adam Young

On 11/06/2015 06:28 PM, Tim Hinrichs wrote:
Congress allows users to write a policy that executes an action under 
certain conditions.


The conditions can be based on any data Congress has access to, which 
includes nova servers, neutron networks, cinder storage, keystone 
users, etc.  We also have some Ceilometer statistics; I'm not sure 
about whether it's easy to get the Keystone notifications that you're 
talking about today, but notifications are on our roadmap.  If the 
user's login is reflected in the Keystone API, we may already be 
getting that event.


The action could in theory be a mistral/heat API or an arbitrary 
script.  Right now we're set up to invoke any method on any of the 
python-clients we've integrated with. We've got an integration with 
heat but not mistral.  New integrations are typically easy.


Sounds like Mistral and Congress are competing here, then.  Maybe we 
should merge those efforts.




Happy to talk more.

Tim



On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann > wrote:


Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
> On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann
> wrote:
>
> > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41
-0800:
> > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12
-0500:
> > > > > Can people help me work through the right set of tools
for this use
> > case
> > > > > (has come up from several Operators) and map out a plan
to implement
> > it:
> > > > >
> > > > > Large cloud with many users coming from multiple
Federation sources
> > has
> > > > > a policy of providing a minimal setup for each user upon
first visit
> > to
> > > > > the cloud:  Create a project for the user with a minimal
quota, and
> > > > > provide them a role assignment.
> > > > >
> > > > > Here are the gaps, as I see it:
> > > > >
> > > > > 1.  Keystone provides a notification that a user has
logged in, but
> > > > > there is nothing capable of executing on this
notification at the
> > > > > moment.  Only Ceilometer listens to Keystone notifications.
> > > > >
> > > > > 2.  Keystone does not have a workflow engine, and should
not be
> > > > > auto-creating projects.  This is something that should
be performed
> > via
> > > > > a Heat template, and Keystone does not know about Heat,
nor should
> > it.
> > > > >
> > > > > 3.  The Mapping code is pretty static; it assumes a user
entry or a
> > > > > group entry in identity when creating a role assignment,
and neither
> > > > > will exist.
> > > > >
> > > > > We can assume a special domain for Federated users to
have per-user
> > > > > projects.
> > > > >
> > > > > So; lets assume a Heat Template that does the following:
> > > > >
> > > > > 1. Creates a user in the per-user-projects domain
> > > > > 2. Assigns a role to the Federated user in that project
> > > > > 3. Sets the minimal quota for the user
> > > > > 4. Somehow notifies the user that the project has been
set up.
> > > > >
> > > > > This last probably assumes an email address from the
Federated
> > > > > assertion.  Otherwise, the user hits Horizon, gets a "not
> > authenticated
> > > > > for any projects" error, and is stumped.
> > > > >
> > > > > How is quota assignment done in the other projects now? 
What happens

> > > > > when a project is created in Keystone?  Does that
information gets
> > > > > transferred to the other services, and, if so, how?  Do
most people
> > use
> > > > > a custom provisioning tool for this workflow?
> > > > >
> > > >
> > > > I know at Dreamhost we built some custom integration that
was triggered
> > > > when someone turned on the Dreamcompute service in their
account in our
> > > > existing user management system. That integration created
the account
> > in
> > > > keystone, set up a default network in neutron, etc. I've
long thought
> > we
> > > > needed a "new tenant creation" service of some sort, that
sits outside
> > > > of our existing services and pokes them to do something
when a new
> > > > tenant is established. Using heat as the implementation
makes sense,
> > for
> > > > things that heat can control, but we don't want keystone
to depend on
> > > > heat and we don't want to bake such a specialized feature
into heat
> > > > itself.
> > > >
> > >
> > > I agree, an automation piece that is built-in and easy to add to
> > > OpenStack would be great.
> > >
> > > I do not agree that it should be Heat. 

Re: [openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-09 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2015-11-03 14:46:04 -0500:
> As we discussed at the summit, the release management team is
> modifying our change management tracking tools and processes this
> cycle. This email is the official announcement of those changes,
> with more detail than we provided at the summit.
> 
> In past cycles, we have used a combination of Launchpad milestone
> pages and our wiki to track changes in releases. We used to pull
> together release notes for stable point releases at the time of
> release. Most of that work fell to the stable maintenance and release
> teams. Similarly, the release managers worked with PTLs and release
> liaisons at each milestone checkpoint to update Launchpad to
> accurately reflect the work completed at each stage of development.
> It's a lot of work to fix up Launchpad and assemble the notes and
> make sure they are accurate, which has caused us to be a bottleneck
> for clear and complete communication at the time of the release.
> We have been looking for ways to reduce that effort for these tasks
> and eliminate the bottleneck for some time.
> 
> This cycle, to address these problems for our ever-growing set of
> projects, the release management team is introducing a new tool for
> handling release notes as files in-tree, to allow us to simply and
> continuously build the release notes for stable branch point releases
> and milestones on the master branch. The idea is to use small YAML
> files, usually one per note or patch, to avoid merge conflicts on
> backports and then to compile those files in a deterministic way
> into a more readable document for readers. Files containing release
> notes can be including in patches directly, or you can create a
> separate patch with release notes if you want to document a feature
> than spans several patches.  The tool is called Reno, and it currently
> supports ReStructuredText and Sphinx for converting note input files
> to HTML for publication.  Reno is git branch-aware, so we can have
> separate release notes documents for each release series published
> together from the master build.
> 
> The documentation for Reno, including design principles and basic
> usage instructions, is available at [1]. For now we are focusing
> on Sphinx integration so that release notes are published online.
> We will add setuptools integration in a future version of Reno so
> that the release notes can be built with the source distribution.
> 
> As part of this rollout, I will also be updating the settings for
> the gerrit hook script so that when a patch with "Closes-Bug" in
> the commit message is merged the bug will be marked as "Fix Released"
> instead of "Fix Committeed" (since "Fix Committed" is not a closed
> state). When that work is done, I'll send another email to let PTLs
> know they can go through their existing bugs and change their status.
> 
> We are ready to start rolling out Reno for use with Liberty stable
> branch releases and in master for the Mitaka release. We need the
> release liaisons to create and merge a few patches for each project
> between now and the Mitaka-1 milestone.
> 
> 1. We need one patch to the master branch of the project to add the
>instructions for publishing the notes as part of the project
>sphinx documentation build.  An example patch for Glance is in
>[2].
> 
> 2. We need another patch to the stable/liberty branch of the project
>to set up Reno and introduce the first release note for that
>series. An example patch for Glance is in [3].
> 
> 3. Each project needs to turn on the relevant jobs in project-config.
>An example patch using Glance is in [4]. New patches will need
>to be based on the change that adds the necessary template [5],
>until that lands.
> 
> 4. Reno was not ready before the summit, so we started by using the
>wiki for release notes for the initial Liberty releases. We also
>need liaisons to convert those notes to reno YAML files in the
>stable/liberty branch of each project.
> 
> Please use the topic "add-reno" for all patches so we can track
> adoption.
> 
> Once those merge, project teams can stop using Launchpad for tracking
> completed work. We will still use Launchpad for bug reports, for
> now. If a team wants to continue using it for tracking blueprints,
> that's fine.  If a team wants to use Launchpad for scheduling work
> to be done in the future, but not for release tracking, that is
> also fine. The release management team will no longer be reviewing
> or updating Launchpad as part of the release process.
> 
> Thanks,
> Doug
> 
> [1] http://docs.openstack.org/developer/reno/
> [2] https://review.openstack.org/241323
> [3] https://review.openstack.org/241322
> [4] https://review.openstack.org/241344
> [5] https://review.openstack.org/241343
> 

fungi and AJaeger identified a hole in the existing setup that might
allow bad changes on stable branches to break master. We're working on
fixing that right now. 
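
As a concrete illustration of the note format described above: a reno
note is just a small YAML file dropped under releasenotes/notes/. Here
is a sketch of writing one from Python, where the section keys
(features, upgrade, ...) come from the reno docs and the
"<slug>-<hex>.yaml" filename convention is assumed from what "reno new"
generates:

    import os
    import uuid

    import yaml

    note = {
        'features': ['The foo API now supports bar filtering.'],
        'upgrade': ['No configuration changes are required.'],
    }
    # Assumes the releasenotes/notes directory already exists in the repo.
    name = 'add-bar-filtering-%s.yaml' % uuid.uuid4().hex[:16]
    with open(os.path.join('releasenotes', 'notes', name), 'w') as f:
        yaml.safe_dump(note, f, default_flow_style=False)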

Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-09 Thread Matthew Treinish
On Mon, Nov 09, 2015 at 05:41:53PM +0100, Thierry Carrez wrote:
> Hi everyone,
> 
> A few cycles ago we set up the Release Cycle Management team which was a
> bit of a frankenteam of the things I happened to be leading: release
> management, stable branch maintenance and vulnerability management.
> While you could argue that there was some overlap between those
> functions (as in, "all these things need to be released") logic was not
> the primary reason they were put together.
> 
> When the Security Team was created, the VMT was spun out of the
> Release Cycle Management team and joined there. Now I think we should
> spin out stable branch maintenance as well:
> 
> * A good chunk of the stable team work used to be stable point release
> management, but as of stable/liberty this is now done by the release
> management team and triggered by the project-specific stable maintenance
> teams, so there is no more overlap in tooling used there
> 
> * Following the kilo reform, the stable team is now focused on defining
> and enforcing a common stable branch policy[1], rather than approving
> every patch. Being more visible and having more dedicated members can
> only help in that very specific mission
> 
> * The release team is now headed by Doug Hellmann, who is focused on
> release management and does not have the history I had with stable
> branch policy. So it might be the right moment to refocus release
> management solely on release management and get the stable team its own
> leadership
> 
> * Empowering that team to make its own decisions, giving it more
> visibility and recognition will hopefully lead to more resources being
> dedicated to it
> 
> * If the team expands, it could finally own stable branch health and
> gate fixing. If that ends up all falling under the same roof, that team
> could make decisions on support timeframes as well, since it will be the
> primary resource to make that work
> 
> So.. good idea ? bad idea ? What do current stable-maint-core[2] members
> think of that ? Who thinks they could step up to lead that team ?

So I don't see the point; do we really think this needs to be a thing? Most of
the backports and reviews are done by project-specific teams. What are you
actually proposing the team produce? With the point releases going away, the only
actual thing the team was responsible for disappears. Right now most of [1] is
inactive when it comes to day-to-day stable branch activities.

It seems to me that, besides a few of us looking at gate issues when we have to
backport something and find nothing working, the only thing [1] does is respond
to requests to add people to project-specific stable core teams (which is
something I've argued against in the past). I don't think adding a separate team
to governance for this really makes sense.

That being said, I can understand the point of view that Doug doesn't want to
worry about stable, something I can't really blame him for, because apparently
neither do most people (including myself on most days).


> [1] http://docs.openstack.org/project-team-guide/stable-branches.html
> [2] https://review.openstack.org/#/admin/groups/530,members
> 


-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-09 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-11-09 17:41:53 +0100:
> Hi everyone,
> 
> A few cycles ago we set up the Release Cycle Management team which was a
> bit of a frankenteam of the things I happened to be leading: release
> management, stable branch maintenance and vulnerability management.
> While you could argue that there was some overlap between those
> functions (as in, "all these things need to be released") logic was not
> the primary reason they were put together.
> 
> When the Security Team was created, the VMT was spun out of the
> Release Cycle Management team and joined there. Now I think we should
> spin out stable branch maintenance as well:
> 
> * A good chunk of the stable team work used to be stable point release
> management, but as of stable/liberty this is now done by the release
> management team and triggered by the project-specific stable maintenance
> teams, so there is no more overlap in tooling used there
> 
> * Following the kilo reform, the stable team is now focused on defining
> and enforcing a common stable branch policy[1], rather than approving
> every patch. Being more visible and having more dedicated members can
> only help in that very specific mission
> 
> * The release team is now headed by Doug Hellmann, who is focused on
> release management and does not have the history I had with stable
> branch policy. So it might be the right moment to refocus release
> management solely on release management and get the stable team its own
> leadership
> 
> * Empowering that team to make its own decisions, giving it more
> visibility and recognition will hopefully lead to more resources being
> dedicated to it
> 
> * If the team expands, it could finally own stable branch health and
> gate fixing. If that ends up all falling under the same roof, that team
> could make decisions on support timeframes as well, since it will be the
> primary resource to make that work
> 
> So.. good idea ? bad idea ? What do current stable-maint-core[2] members
> think of that ? Who thinks they could step up to lead that team ?
> 
> [1] http://docs.openstack.org/project-team-guide/stable-branches.html
> [2] https://review.openstack.org/#/admin/groups/530,members
> 

+1

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] live migration sub-team meeting

2015-11-09 Thread Michael Still
So, it's been a week. What time are we picking?

Michael

On Thu, Nov 5, 2015 at 10:46 PM, Murray, Paul (HP Cloud) 
wrote:

> > > Most team members expressed they would like a regular IRC meeting for
> > > tracking work and raising blocking issues. Looking at the contributors
> > > here [2], most of the participants seem to be in the European
> > > continent (in time zones ranging from UTC to UTC+3) with a few in the
> > > US (please correct me if I am wrong). That suggests that a time around
> > > 1500 UTC makes sense.
> > >
> > > I would like to invite suggestions for a day and time for a weekly
> > > meeting -
> >
> > Maybe you could create a quick Doodle poll to reach a rough consensus on
> > day/time:
> >
> > http://doodle.com/
>
> Yes, of course, here's the poll:
>
> http://doodle.com/poll/rbta6n3qsrzcqfbn
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stackalytics] possibly breaking bug closing statistics?

2015-11-09 Thread Doug Hellmann
Stackalytics experts,

When we roll out the launchpad process changes described in my earlier
email [1], we will no longer be targeting bugs as part of closing them.

Thierry raised a concern that this might break the way stackalytics
calculates closed bug statistics for a cycle. Looking at the code
in [2] and [3], I see it querying by date range and looking at the
status, but not looking at the milestone to which the bug is targeted.

Am I right in believing that we won't have any change in our
statistics gathering if we go ahead with the current plan?

Thanks,
Doug

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html 
[2] 
http://git.openstack.org/cgit/openstack/stackalytics/tree/stackalytics/processor/main.py#n99
[3] 
http://git.openstack.org/cgit/openstack/stackalytics/tree/stackalytics/processor/record_processor.py#n511
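
If it helps anyone double-check, the launchpad side of that query
pattern amounts to roughly the following; the searchTasks arguments are
from memory and should be verified against launchpadlib before relying
on them:

    import datetime

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('bug-stats-check', 'production')
    project = lp.projects['nova']
    since = datetime.datetime(2015, 9, 1)
    # Status plus date range, as in [2]/[3]; note that nothing here
    # filters on the milestone a bug is targeted to.
    tasks = project.searchTasks(status=['Fix Released'],
                                modified_since=since)
    print(len(list(tasks)))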

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Report from Gerrit User Summit

2015-11-09 Thread James E. Blair
Khai Do and I attended the Gerrit User Summit this weekend.  It was a
very busy weekend indeed with quite a lot of activity in all areas
related to Gerrit.  The following is a brief summary of items which
may be of interest to our community.

--==--

David Pursehouse from Sony Mobile spoke about what's new in 2.11 and
coming in 2.12:

* Inline editor.  There will be an inline editor so that edits to
  commits can be made through the web interface.

* Better file sorting for C/C++ (headers listed before source).  We
  may want to look into the mechanism used for this to see if it can
  be applied to other situations, such as listing test files before or
  after implementation.

* Submit changes with ancestors.  When the tip of a branch is
  submitted and all of its ancestors are submittable, they will all be
  submitted together.  If ancestors are not submittable, the change at
  the tip will not be submittable.  This removes the submit queue and
  the "Submitted, merge pending" state from Gerrit and generally makes
  submission an immediate atomic operation.  This is an improvement
  that will make several edge cases we encounter simpler, however,
  because Zuul handles submission and does so using strict ordering,
  this change will mostly be invisible in the OpenStack environment.

* Submit whole topic.  This is an implementation of the much-requested
  feature to merge changes to multiple repositories simultaneously.

  This uses the "topic" field to designate changes that should be
  merged simultaneously.  When this feature is enabled, the only
  "submit" option for a change which shares a topic with other changes
  will be "Submit whole topic".  There is a panel on the change screen
  that indicates which changes will be submitted together with the
  current one.  There was some discussion that this limits options --
  a user may not want to submit all changes on a topic at once, and
  unrelated changes may inadvertently end up sharing a topic,
  especially in busy systems or if a poor topic name is chosen (eg
  "test"), and that it formalizes one particular use of the "topic"
  field which to this point has been more free-form.  The authors are
  interested in getting an early version of this out for feedback, but
  acknowledge they have not considered all use-cases yet and may need
  to revise it.

  In the OpenStack community, we have decided to avoid pursuing this
  kind of feature because the alternative -- strict sequencing of
  co-dependent changes -- provides for better upgrade code and is more
  friendly to continuous deployers.  This feature can be disabled, so
  I anticipate we would do so and we would notice no substantial
  changes because of this.  Of course, if we want to revisit our
  decision, we could do so.

* Option to require all commits pushed be GPG signed.

* Search by author or committer.  Also, search by comment author.

* As noted in another recent thread by Khai, the hashtags support
  (user-defined tags applied to changes) exists but depends on notedb
  which is not ready for use yet (targeted for 3.0 which is probably
  at least 6 months off).

--==--

Shane McIntosh from McGill University presented an overview of his
research into the efficacy of code review.  The data he studied
include several open source projects including OpenStack.  His
papers[1] are online, but some quick highlights from his research:

* Modules with a high percentage of review-focused developers are less
  likely to be defective.
* There is a sweet spot around two reviewers, where more reviewers are
  less likely to find more defects.

And some tidbits from other researchers:

* Older code is less defect prone (Graves, et al, TSE 2000)
* Code with weak ownership is more defect prone (Bird, et al,
  ESEC/FSE 2011)

[1] http://shanemcintosh.org/tags/code-review.html

--==--

There was a litany of presentations about Gerrit installations.  I
believe we may be one of the larger public Gerrit users, but we are
not even remotely near the large end of the scale when private
installations are considered.  Very large installations can be run on
a single large instance.  Many users are able to use a master-slave
configuration to spread load.  Perhaps only Google is running a
multi-master system, though they utilize secret Google-only
technology.  It is possible, likely even with open-source components,
but would require substantial customized code.  It is likely that the
notedb work in Gerrit 3.0 will simplify this.

--==--

I gave a short presentation on Gertty.  The authors of the Gerrit REST
API were happy to see that it could support something like Gertty.

--==--

Johannes Nicolai of CollabNet presented a framework for tuning Gerrit
parameters, and produced a handout[2] as a guideline.

It was noted that the Gerrit documentation recommends disabling
connection pooling with MySQL.  This is apparently because of bad
experiences with the MySQL server dropping idle connections.  Since we
have addressed 

Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-09 Thread Matt Riedemann



On 11/9/2015 10:41 AM, Thierry Carrez wrote:

Hi everyone,

A few cycles ago we set up the Release Cycle Management team which was a
bit of a frankenteam of the things I happened to be leading: release
management, stable branch maintenance and vulnerability management.
While you could argue that there was some overlap between those
functions (as in, "all these things need to be released") logic was not
the primary reason they were put together.

When the Security Team was created, the VMT was spun out of the
Release Cycle Management team and joined there. Now I think we should
spin out stable branch maintenance as well:

* A good chunk of the stable team work used to be stable point release
management, but as of stable/liberty this is now done by the release
management team and triggered by the project-specific stable maintenance
teams, so there is no more overlap in tooling used there

* Following the kilo reform, the stable team is now focused on defining
and enforcing a common stable branch policy[1], rather than approving
every patch. Being more visible and having more dedicated members can
only help in that very specific mission

* The release team is now headed by Doug Hellmann, who is focused on
release management and does not have the history I had with stable
branch policy. So it might be the right moment to refocus release
management solely on release management and get the stable team its own
leadership

* Empowering that team to make its own decisions, giving it more
visibility and recognition will hopefully lead to more resources being
dedicated to it

* If the team expands, it could finally own stable branch health and
gate fixing. If that ends up all falling under the same roof, that team
could make decisions on support timeframes as well, since it will be the
primary resource to make that work


Isn't this kind of already what the stable maint team does? Well, that 
and some QA people like mtreinish and sdague.




So.. good idea ? bad idea ? What do current stable-maint-core[2] members
think of that ? Who thinks they could step up to lead that team ?

[1] http://docs.openstack.org/project-team-guide/stable-branches.html
[2] https://review.openstack.org/#/admin/groups/530,members



With the decentralization of the stable branch stuff in Liberty [1], it 
seems like there would be less use for a PTL for stable branch 
maintenance - the cats are now herding themselves, right? Or at least 
that's the plan as far as I understood it. And the existing stable 
branch wizards are more or less around for help and answering questions.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078281.html


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominations open for the N and O names of OpenStack

2015-11-09 Thread Adam Lawson
Norris! As in ... Chuck Norris lives in Texas.

Just saying. ; )
On Nov 9, 2015 5:31 AM, "Monty Taylor"  wrote:

> Hey everybody!
>
> It's release naming time, and this time we get to do two at once!
>
> If you'd like to propose a name, there are two wiki pages:
>
> For the N release, where the geographic region is "Texas Hill Country":
>
> https://wiki.openstack.org/wiki/Release_Naming/N_Proposals
>
> For the O release, where the geographic region is "Catalonia":
>
> https://wiki.openstack.org/wiki/Release_Naming/O_Proposals
>
> We're going to keep proposals open until 2015-11-15 23:59:59 UTC, and
> voting will start 2015-11-30. The time in between proposals closing and
> voting starting is to allow for a TC meeting to approve any exceptional
> names that people propose, and then to not attempt to have a vote spanning
> the US Thanksgiving holiday.
>
> Have fun!
>
> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension access agent methods ?

2015-11-09 Thread Vikram Choudhary
Hi Cathy,

Could you please check on this? My mother passed away yesterday and I will
be on leave for a couple of weeks.

Thanks
Vikram
On 09-Nov-2015 6:15 pm, "Ihar Hrachyshka"  wrote:

> Thanks Thomas, much appreciated.
>
> I need to admit that we haven’t heard from SFC folks just yet. I will try
> to raise awareness that we wait for their feedback today on team meeting.
> Adding [sfc] tag to the topic to get more attention.
>
> Ihar
>
> Thomas Morin  wrote:
>
> Hi Ihar,
>>
>> Ihar Hrachyshka :
>>
>>> Reviving the thread.
>>> [...] (I appreciate if someone checks me on the following though):
>>>
>>
>> This is an excellent recap.
>>
>>  I set up a new etherpad to collect feedback from subprojects [2].
>>>
>>
>> I've filled in details for networking-bgpvpn.
>> Please tell me if you need more information.
>>
>> Once we collect use cases there and agree on agent API for extensions
>>> (even if per agent type), we will implement it and define as stable API,
>>> then pass objects that implement the API into extensions thru extension
>>> manager. If extensions support multiple agent types, they can still
>>> distinguish between which API to use based on agent type string passed into
>>> extension manager.
>>>
>>> I really hope we start to collect use cases early so that we have time
>>> to polish agent API and make it part of l2 extensions earlier in Mitaka
>>> cycle.
>>>
>>
>> We'll be happy to validate the applicability of this approach as soon as
>> something is ready.
>>
>> Thanks for taking up this work!
>>
>> -Thomas
>>
>>
>>
>> Ihar Hrachyshka  wrote:
>>>
>>> On 30 Sep 2015, at 12:53, Miguel Angel Ajo  wrote:
>
>
>
> Ihar Hrachyshka wrote:
>
>> On 30 Sep 2015, at 12:08, thomas.mo...@orange.com wrote:
>>>
>>> Hi Ihar,
>>>
>>> Ihar Hrachyshka :
>>>
 Miguel Angel Ajo :
>
>> Do you have a rough idea of what operations you may need to do?
>>
> Right now, what bagpipe driver for networking-bgpvpn needs to
> interact with is:
> - int_br OVSBridge (read-only)
> - tun_br OVSBridge (add patch port, add flows)
> - patch_int_ofport port number (read-only)
> - local_vlan_map dict (read-only)
> - setup_entry_for_arp_reply method (called to add static ARP
> entries)
>
 Sounds very tightly coupled to OVS agent.

> Please bear in mind, the extension interface will be available
>> from different agent types
>> (OVS, SR-IOV, [eventually LB]), so this interface you're talking
>> about could also serve as
>> a translation driver for the agents (where the translation is
>> possible), I totally understand
>> that most extensions are specific agent bound, and we must be
>> able to identify
>> the agent we're serving back exactly.
>>
> Yes, I do have this in mind, but what we've identified for now
> seems to be OVS specific.
>
 Indeed it does. Maybe you can try to define the needed pieces in
 high level actions, not internal objects you need to access to. Like ‘-
 connect endpoint X to Y’, ‘determine segmentation id for a network’ 
 etc.

>>> I've been thinking about this, but would tend to reach the
>>> conclusion that the things we need to interact with are pretty hard to
>>> abstract out into something that would be generic across different 
>>> agents.
>>> Everything we need to do in our case relates to how the agents use 
>>> bridges
>>> and represent networks internally: linuxbridge has one bridge per 
>>> Network,
>>> while OVS has a limited number of bridges playing different roles for 
>>> all
>>> networks with internal segmentation.
>>>
>>> To look at the two things you  mention:
>>> - "connect endpoint X to Y" : what we need to do is redirect the
>>> traffic destinated to the gateway of a Neutron network, to the thing 
>>> that
>>> will do the MPLS forwarding for the right BGP VPN context (called VRF), 
>>> in
>>> our case br-mpls (that could be done with an OVS table too) ; that 
>>> action
>>> might be abstracted out to hide the details specific to OVS, but I'm not
>>> sure on how to  name the destination in a way that would be agnostic to
>>> these details, and this is not really relevant to do until we have a
>>> relevant context in which the linuxbridge would pass packets to 
>>> something
>>> doing MPLS forwarding (OVS is currently the only option we support for 
>>> MPLS
>>> forwarding, and it does not really make sense to mix linuxbridge for
>>> Neutron L2/L3 and OVS for MPLS)
>>> - "determine segmentation id for a network": this is something
>>> really OVS-agent-specific, the linuxbridge agent uses 

Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-09 Thread Adam Heczko
Dmitry,
+1

Do you plan to port your patchset to future Fuel releases?

A.

On Tue, Nov 10, 2015 at 12:14 AM, Dmitry Nikishov 
wrote:

> Hey guys.
>
> I've been working on making Fuel not rely on superuser privileges,
> at least for day-to-day operations. These include:
> a) running Fuel services (nailgun, astute etc)
> b) user operations (create env, deploy, update, log in)
>
> The reason for this is that many security policies simply do not
> allow root access (especially remote) to servers/environments.
>
> This feature/enhancement means that anything that is currently run as
> root will be evaluated and, if possible, put under a non-privileged
> user. This also means that remote root access will be disabled.
> Instead, users will have to log in with "fueladmin" user.
>
> Together with Omar, we've put together a blueprint[0] and a
> spec[1] for this feature. I've been developing this for Fuel 6.1, so there
> are two patches into fuel-main[2] and fuel-library[3] that can give you an
> impression of current approach.
>
> These patches do following:
> - Add fuel-admin-user package, which creates 'fueladmin'
> - Make all other fuel-* packages depend on fuel-admin-user
> - Put supervisord under 'fueladmin' user.
>
> Please review the spec/patches and let's have a discussion on the approach
> to
> this feature.
>
> Thank you.
>
> [0] https://blueprints.launchpad.net/fuel/+spec/fuel-nonsuperuser
> [1] https://review.openstack.org/243340
> [2] https://review.openstack.org/243337
> [3] https://review.openstack.org/243313
>
> --
> Dmitry Nikishov,
> Deployment Engineer,
> Mirantis, Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-09 Thread Matthew Treinish
On Mon, Nov 09, 2015 at 05:05:36PM +1100, Tony Breeds wrote:
> On Fri, Nov 06, 2015 at 10:12:21AM -0800, Clint Byrum wrote:
> 
> > The argument in the original post, I think, is that we should not
> > stand in the way of the vendors continuing to collaborate on stable
> > maintenance in the upstream context after the EOL date. We already have
> > distro vendors doing work in the stable branches, but at EOL we push
> > them off to their respective distro-specific homes.
> 
> That is indeed a better summary than I started with.
> 
> I have a half-formed idea that creates a state between the current EOL (where
> we delete the branches) and what we have today (where we have full
> CI/VMT/Release management).

So this idea has come up before and it has been rejected for good reasons. If
vendors want to collaborate, but not actually work to keep things working in
the gate, is that really collaboration? It's just vendors pushing whatever random
backports they want into a shared repo. There is a reason we do all the testing,
and it's deeply encoded into the OpenStack culture. If keeping things
verifiably working with the same set of test jobs a branch was created with is
too much of a burden for people, then there isn't a reason to keep the branch around.

This is the crux of why we have shorter branch support windows. No matter how
much complaining people do about wanting LTS releases or longer support
windows (and how we'll have world peace when they can use Grizzly with upstream
support for 17 years), it doesn't change the fact that barely anyone ever steps
up to work on keeping the gate working on stable branches.

Tony, as someone who has done a good job coming up to speed on fixing issues on
the stable branches, you know this firsthand. We always end up talking to the same
handful of people to debug issues. We're also almost always in firefighting mode,
and regular failure rates on stable just keep going up when we look away. People
also burn out quickly debugging these issues all the time. Personally, I know I
don't keep an eye on things nearly as closely as I did before.

> 
> There is a massive amount of trust involved in that simple statement and I
> don't underestimate that.
> 
> This would provide a place where interested vendors can work together.
> We could disable grenade in juno, which isn't awesome but removes the need to
> land a patch in juno, kilo and liberty just to unblock the master gate.
> We could reduce the VMT impact by nominating a point of contact for juno and
> granting that person access to the embargoed bugs.  Similarly we could
> trust/delegate a certain amount of the release management to that team (I say
> team but so far we've only got one volunteer)
> 
> We can't ignore the fact that fixing things in Juno *may* still require fixes
> in Kilo (and later releases) esp. given the mess that is requirements in
> stable/juno
>  
> > As much as I'd like everyone to get on the CD train, I think it might
> > make sense to enable the vendors to not diverge, but instead let them
> > show up with people and commitment and say "Hey we're going to keep
> > Juno/Mitaka/etc alive!".
> > 
> > So perhaps what would make sense is defining a process by which they can
> > make that happen.
> > 
> > Note that it's not just backporters though. It's infra resources too.
> 
> Sure.  CI and Python 2.6 I have a little understanding of.  I guess I can
> extrapolate the additional burden on {system,project}-config.  I willingly
> admit I don't have a detailed feel for what I'm asking here.

It's more than just that too; there are additional demands on Tempest and other
QA projects like devstack and grenade. Tempest is branchless, so to keep
branches around longer you have to have additional jobs running on tempest to
ensure incoming changes work across all releases. In Tokyo we were discussing
doing the same thing on the client repos too, because they make similar
guarantees about backwards compat but we never test it. There is a ton of extra
load generated by keeping things around for longer; it's not to be taken
lightly, especially given the historical lack of contribution in this space.
This is honestly why our one experiment in longer support ended in failure:
nobody stepped up to support the extra branch. To even attempt it again we need
proof that things have improved, and so far they clearly haven't.

-Matt Treinish




Re: [openstack-dev] [Glance] image.exists events

2015-11-09 Thread Flavio Percoco

On 09/11/15 15:57 +0200, Cristian Tomoiaga wrote:

Hello,

I recently created a script to generate image.exists events similar to what is
described here:
https://blueprints.launchpad.net/glance/+spec/glance-exists-notification

I am wondering if it's a good idea to finish the implementation described in
that spec?

image.exists events are especially useful for resource accounting/billing (what
images were active for a tenant 7 months ago, for example).

(see nova instance.exists events or cinder, neutron and probably other
projects)
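
The core of such a script can be little more than a notification emitted
through oslo.messaging for each audit period. A minimal sketch (the payload
shape is my own assumption, modeled on nova's instance.exists, and is not
taken from the spec):

    # Minimal sketch: emit an image.exists audit notification through
    # oslo.messaging. The payload fields below are assumptions modeled
    # on nova's instance.exists, not taken from the Glance spec.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id='image.localhost',
                                       driver='messaging',
                                       topics=['notifications'])

    payload = {
        'owner': 'some-tenant-id',  # hypothetical tenant
        'audit_period_beginning': '2015-11-01T00:00:00Z',
        'audit_period_ending': '2015-11-09T00:00:00Z',
        'images': [{'id': 'some-image-id', 'status': 'active'}],
    }
    notifier.info({}, 'image.exists', payload)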


Hi Cristian,

It'd be great if you could propose this as a spec in Glance:
http://specs.openstack.org/openstack/glance-specs/

That way, we'll be able to provide better feedback on the actual
need and timing.

Overall, I'd say this is something we may need in Glance.

Thanks,
Flavio



-- 
Cristian Tomoiaga






--
@flaper87
Flavio Percoco




[openstack-dev] [Fuel][fuel] How can I install Redhat-OSP using Fuel

2015-11-09 Thread Fei LU
Greeting Fuel teams,

My company is working on the installation of virtualization infrastructure, and 
we have noticed Fuel is a great tool, much better than our own installer. The 
issue is that Mirantis currently supports OpenStack on CentOS and 
Ubuntu, while my company is using Redhat-OSP.
I have read all the Fuel documents, including the Fuel dev docs, but I haven't 
found how to add my own release into Fuel. Or maybe I'm missing something.
So, would you guys please give some guidance or hints?
Appreciating any help.

Kane


Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-09 Thread Matthew Treinish
On Mon, Nov 09, 2015 at 10:54:43PM +, Kuvaja, Erno wrote:
> > On Mon, Nov 09, 2015 at 05:28:45PM -0500, Doug Hellmann wrote:
> > > Excerpts from Matt Riedemann's message of 2015-11-09 16:05:29 -0600:
> > > >
> > > > On 11/9/2015 10:41 AM, Thierry Carrez wrote:
> > > > > Hi everyone,
> > > > >
> > > > > A few cycles ago we set up the Release Cycle Management team which
> > > > > was a bit of a frankenteam of the things I happened to be leading:
> > > > > release management, stable branch maintenance and vulnerability
> > management.
> > > > > While you could argue that there was some overlap between those
> > > > > functions (as in, "all these things need to be released"), logic
> > > > > was not the primary reason they were put together.
> > > > >
> > > > > When the Security Team was created, the VMT was spun out of the
> > > > > Release Cycle Management team and joined there. Now I think we
> > > > > should spin out stable branch maintenance as well:
> > > > >
> > > > > * A good chunk of the stable team work used to be stable point
> > > > > release management, but as of stable/liberty this is now done by
> > > > > the release management team and triggered by the project-specific
> > > > > stable maintenance teams, so there is no more overlap in tooling
> > > > > used there
> > > > >
> > > > > * Following the kilo reform, the stable team is now focused on
> > > > > defining and enforcing a common stable branch policy[1], rather
> > > > > than approving every patch. Being more visible and having more
> > > > > dedicated members can only help in that very specific mission
> > > > >
> > > > > * The release team is now headed by Doug Hellmann, who is focused
> > > > > on release management and does not have the history I had with
> > > > > stable branch policy. So it might be the right moment to refocus
> > > > > release management solely on release management and get the stable
> > > > > team its own leadership
> > > > >
> > > > > * Empowering that team to make its own decisions, giving it more
> > > > > visibility and recognition will hopefully lead to more resources
> > > > > being dedicated to it
> > > > >
> > > > > * If the team expands, it could finally own stable branch health
> > > > > and gate fixing. If that ends up all falling under the same roof,
> > > > > that team could make decisions on support timeframes as well,
> > > > > since it will be the primary resource to make that work
> > > >
> > > > Isn't this kind of already what the stable maint team does? Well,
> > > > that and some QA people like mtreinish and sdague.
> > > >
> > > > >
> > > > > So... good idea? Bad idea? What do current stable-maint-core[2]
> > > > > members think of that? Who thinks they could step up to lead that
> > > > > team?
> > > > >
> > > > > [1]
> > > > > http://docs.openstack.org/project-team-guide/stable-branches.html
> > > > > [2] https://review.openstack.org/#/admin/groups/530,members
> > > > >
> > > >
> > > > With the decentralizing of the stable branch stuff in Liberty [1] it
> > > > seems like there would be less use for a PTL for stable branch
> > > > maintenance - the cats are now herding themselves, right? Or at
> > > > least that's the plan as far as I understood it. And the existing
> > > > stable branch wizards are more or less around for help and answering
> > > > questions.
> > >
> > > The same might be said about releasing from master and the release
> > > management team. There's still some benefit to having people dedicated
> > > to making sure projects all agree to sane policies and to keep up with
> > > deliverables that need to be released.
> > 
> > Except the distinction is that relmgt is actually producing something.
> > Relmgt has the releases repo which does centralize library releases, reno
> > to do the release notes, etc. What does the global stable core do? Right
> > now it's there almost entirely just to add people to the project-specific
> > stable core teams.
> > 
> > -Matt Treinish
> 
> 
> I'd like to move the discussion away from the roles of the current 
> stable-maint-core and more towards what the benefits would be of having a 
> stable-maint team rather than the -core group alone.
> 
> Personally I think stable maintenance should be quite a lot more than 
> unblocking the gate and approving the people allowed to merge to the stable branches.
> 

Sure, but that's not what we're talking about here, is it? The other tasks, like
backporting changes for example, have been taken on by project teams. Even in
your other email you mentioned that you've been doing backports and other tasks
that you consider stable maint in a glance only context. That's something we
changed in kilo which ttx referenced in [1] to enable that to happen, and it was
the only way to scale things.

The discussion here is about the cross project effort around stable branches,
which by design is a more limited scope now. Right now the cross project effort
around stable branch policy is really 2 things (both of 

Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2015-11-09 Thread Andrew Beekhof

> On 23 Oct 2015, at 7:01 PM, Bogdan Dobrelya  wrote:
> 
> Hello.
> I'm glad to announce that the pacemaker OCF resource agent for the
> rabbitmq clustering, which was born in the Fuel project initially, is now
> available and maintained upstream! It will be shipped with the
> rabbitmq-server 3.5.7 package (release by November, 2015).
> 
> You can read about this OCF agent in the official guide [0] (flow charts
> for promote/demote/start/stop actions in progress).

Sounds interesting, can you give any comment about how it differs from the 
other[i] upstream agent?
Am I right that this one is effectively A/P and won't function without some kind 
of shared storage?
Any particular reason you went down this path instead of full A/A?

[i] 
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster

> 
> And you can try it as a tiny cluster example with a Vagrant box for
> Atlas [1]. Note, this only installs an Ubuntu box with a
> Corosync/Pacemaker & RabbitMQ clusters running, no Fuel or OpenStack
> required :-)
> 
> I'm also planning to refer this official RabbitMQ cluster setup guide in
> the OpenStack HA guide as well [2].
> 
> PS. Original rabbitmq-users mail thread is here [3].
> [openstack-operators] cross posted as well.
> 
> [0] http://www.rabbitmq.com/pacemaker.html
> [1] https://atlas.hashicorp.com/bogdando/boxes/rabbitmq-cluster-ocf
> [2] https://bugs.launchpad.net/openstack-manuals/+bug/1497528
> [3] https://groups.google.com/forum/#!topic/rabbitmq-users/BnoIQJb34Ao
> 
> -- 
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
> 




Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-09 Thread Matt Riedemann



On 11/9/2015 12:21 AM, Tony Breeds wrote:

On Fri, Nov 06, 2015 at 06:42:05PM +, Jeremy Stanley wrote:

On 2015-11-06 10:12:21 -0800 (-0800), Clint Byrum wrote:
[...]

Note that it's not just backporters though. It's infra resources too.


Aye, there's the rub. We don't just EOL these branches for fun or
because we hate old things or because ooh shiny squirrel. We EOL
them at a cadence where the community has demonstrated it loses its
ability to keep them healthy and testable (evaluated based on
performance over recent prior cycles because we have to warn
downstreams well in advance as to when they should expect upstream
support to cease).


Sure, and Juno is in bad shape, as Matt will attest, but I think it can be fixed
;P


Downstream maintainers regularly claim they will step up their
assistance upstream to keep stable branches alive if only we'll
extend the lifespan on them, so we tried that with Icehouse and,
based on our experience there, scaled back the lifespan of Juno
again accordingly. Keep in mind that extending support of stable
branches necessarily implies supporting a larger _number_ of stable
branches in parallel. If we switched from 12 months after release to
18 then we're maintaining at least 3 stable branches at any point in
time. If we extend it to 24 months then that's 4 stable branches.


Yes I agree and frankly I'm disappointed that the support I received in Tokyo
hasn't arrived on this thread (yet).  As to the number of stable branches,
I'm nervous about this.  I don't really want an additional period of
life-support for Juno to mandate a similar commitment for Kilo.

I realise it's a slippery slope.


To those who suggest solving this by claiming one is a LTS release
every couple years, you're implying a vastly different upgrade model
than we have now. If we declare Juno is a LTS and leave it supported
another 12 months, then 6 months from now when we EOL stable/kilo
we'll be telling deployers that they have to upgrade from supported
stable/juno through unsupported stable/kilo to supported
stable/liberty before running stable/mitaka. Or else you're saying
you intend to fix the current inability of our projects to skip
intermediate releases entirely during upgrades (a great idea, and so
I'm thrilled by those of you who intend to make it a reality, we can
revisit the LTS discussion once you finish that).


I carefully *didn't* suggest an OpenStack LTS :)

Yours Tony.






Given the requirements whack-a-mole in stable/juno, and to an extent in 
stable/kilo, I just don't think it's really worth keeping alive what we 
have for stable/juno right now.


I think once we end-of-life Kilo we'll have a much better idea of how 
stable/liberty is going with upper-constraints. If we're not constantly 
putting out gate wedge issues due to capped requirements, that makes for 
a happier stable maint / QA / dev team that is then more willing to keep 
things around longer because they just work (unit test jobs, grenade 
jobs and tempest/dsvm jobs).


Keeping the branches around longer if they are working isn't so bad - 
you only consume CI resources as needed when changes on stable are 
proposed and updated, which is like the experimental queue on master. 
There are the nightly periodic jobs but that's just once a day.


As already noted, the other hit is branchless Tempest running stable 
compat jobs, which is a problem for anyone trying to get Tempest changes 
to land. Think of the race bugs you have to recheck against just for 
master, then multiply that times however many stable branches you have 
to maintain and that's what a Tempest change has to pass through. We 
could also branch Tempest at a certain point though. Tempest has tags 
that correspond to release milestones for the core projects. Internally 
we create branches for Tempest to align with the branches that are EOL 
upstream so we can fix requirements issues as needed (like capping 
requirements on grizzly or havana, for example).


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [tacker] Proposing Sripriya Seetharam to Tacker core

2015-11-09 Thread HADDLETON, Robert W (Bob)
+1

Well deserved!

Bob

On Nov 9, 2015, at 8:23 PM, Sridhar Ramaswamy wrote:

I'd like to propose Sripriya Seetharam to join the Tacker core team. Sripriya
ramped up quickly in the early Liberty cycle and has become an expert in the Tacker
code base. Her major contributions include landing the MANO API blueprint,
introducing the unit test framework along with the initial unit tests, and tirelessly
squashing hard-to-resolve bugs (including chasing the recent nova-neutron goose
hunt). Her reviews are solid, fine-tooth-comb and constructive [1].

I'm glad to welcome Sripriya to the core team. Current core members, please vote
with your +1 / -1.

[1] http://stackalytics.com/?release=liberty&user_id=sseetha&project_type=openstack-others


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-09 Thread Robert Collins
On 10 November 2015 at 08:22, matt  wrote:
> tons from what i've seen.  there are a LOT of havana and even earlier stuff
> out there.  essex is still out there in the wild.

From the Mitaka keynotes we know there are substantially sized public
clouds still in production running Diablo...

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [kuryr] gerrit/git review problem error code 10061

2015-11-09 Thread Jeremy Stanley
On 2015-11-09 10:13:33 +0800 (+0800), Baohua Yang wrote:
> Has anyone recently met this problem after cloning the latest code
> from kuryr? Tried a proxy also, but that didn't solve it.
[...]
> The following command failed with exit code 1
> "scp -P29418 yangbao...@review.openstack.org:hooks/commit-msg
> .git\hooks\commit-msg"
> ---
> FATAL: Unable to connect to relay host, errno=10061
> ssh_exchange_identification: Connection closed by remote host
[...]

I've checked our Gerrit SSH API authentication logs from the past 30
days and find no record of any yangbaohua authenticating. Chances
are this is a broken local proxy or some sort of intercepting
firewall which is preventing your 29418/tcp connection from even
reaching review.openstack.org.

If you use Telnet or NetCat to connect to port 29418 on
review.openstack.org directly, do you see an SSH banner starting
with a string like "SSH-2.0-GerritCodeReview_2.8.4-19-g4548330
(SSHD-CORE-0.9.0.201311081)" or something else?
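
If Telnet or NetCat aren't handy, the same probe is a few lines of Python
(a trivial sketch of the identical check):

    # Trivial sketch: read the SSH banner the same way telnet/nc would.
    import socket

    sock = socket.create_connection(('review.openstack.org', 29418),
                                    timeout=10)
    banner = sock.recv(256).decode('ascii', errors='replace')
    sock.close()
    print(banner)  # expect something like "SSH-2.0-GerritCodeReview_..."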
-- 
Jeremy Stanley




Re: [openstack-dev] [TripleO] RFC: profile matching

2015-11-09 Thread John Trowbridge
In general, I think this is a good idea. Rather than putting the logic
for this in tripleoclient, it would be better to put it in
tripleo-common. Then the --assign-profiles just calls the function
imported from tripleo-common. This way the GUI could consume the same logic.

On 11/09/2015 09:51 AM, Dmitry Tantsur wrote:
> Hi folks!
> 
> I spent some time thinking about bringing profile matching back in, so
> I'd like to get your comments on the following near-future plan.
> 
> First, the scope of the problem. What we do is essentially a kind of
> capability discovery. We'll help nova scheduler with doing the right
> thing by assigning a capability like "suits for compute", "suits for
> controller", etc. The most obvious path is to use inspector to assign
> capabilities like "profile=1" and then filter nodes by it.
> 
> Special care, however, is needed when some of the nodes match 2 or
> more profiles. E.g. if we have all 4 nodes matching "compute" and then
> only 1 matching "controller", nova can select this one node for
> "compute" flavor, and then complain that it does not have enough hosts
> for "controller".
> 
> We also want to conduct some sanity checks before even calling
> heat/nova, to avoid cryptic "no valid host found" errors.
> 
> (1) Inspector part
> 
> During the liberty cycle we've landed a whole bunch of APIs in
> inspector that allow us to define rules on introspection data. The plan
> is to have rules saying, for example:
> 
>  rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
>  rule 2: if local_gb >= 100, add capability "controller_profile=1"
> 
> Note that these rules are defined via inspector API using a JSON-based
> DSL [1].
> 
> As you see, one node can receive 0, 1 or many such capabilities. So we
> need the next step to make a final decision, based on how many nodes we
> need of every profile.
> 
> (2) Modifications of `overcloud deploy` command: assigning profiles
> 
> New argument --assign-profiles will be added. If it's provided,
> tripleoclient will fetch all ironic nodes, and try to ensure that we
> have enough nodes with all profiles.
> 
> Nodes with existing "profile:xxx" capability are left as they are. For
> nodes without a profile it will look at "xxx_profile" capabilities
> discovered in the previous step. One of the possible profiles will be
> chosen and assigned to the "profile" capability. The assignment stops as
> soon as we have enough nodes of a flavor as requested by a user.
> 
> (3) Modifications of `overcloud deploy` command: validation
> 
> To avoid 'no valid host found' errors from nova, the deploy command will
> fetch all flavors involved and look at the "profile" capabilities. If
> they are set for any flavors, it will check if we have enough ironic
> nodes with a given "profile:xxx" capability. This check will happen
> after profile assignment, if --assign-profiles is used.
> 
> Please let me know what you think.
> 
> [1] https://github.com/openstack/ironic-inspector#introspection-rules
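
As a rough illustration, "rule 1" above could be created through the rules
API along these lines (a sketch only: the condition/action names follow the
JSON DSL referenced in [1], and the endpoint, port and field names are my
assumptions, not verified against a specific release):

    # Sketch: create "rule 1" above through the introspection rules API.
    # Condition/action names follow the JSON DSL referenced in [1] but
    # are illustrative, not verified against a specific release.
    import json

    import requests

    rule = {
        'description': 'Nodes with >= 8 GiB RAM suit the compute profile',
        'conditions': [
            {'op': 'ge', 'field': 'memory_mb', 'value': 8192},
        ],
        'actions': [
            {'action': 'set-capability',
             'name': 'compute_profile', 'value': '1'},
        ],
    }

    # Assumed local inspector endpoint; adjust host/port and auth as needed.
    resp = requests.post('http://127.0.0.1:5050/v1/rules',
                         headers={'Content-Type': 'application/json'},
                         data=json.dumps(rule))
    resp.raise_for_status()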
> 


[openstack-dev] [Infra] Meeting Tuesday November 10th at 19:00 UTC

2015-11-09 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday November 10th, at 19:00 UTC in #openstack-meeting.

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-11-03-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-11-03-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-11-03-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-09 Thread Doug Wiegley

> On Nov 5, 2015, at 1:24 PM, Shraddha Pandhe wrote:
> 
> Hi,
> 
> I agree with all of you about the REST Apis.
> 
> As I said before, I had to bring up the idea of a JSON blob because, based on 
> previous discussions, it looked like the neutron community was not willing to 
> enhance the schemas for different ipam dbs. The entire rationale behind pluggable 
> IPAM is to provide flexibility. So, the community should be open to ideas for 
> enhancing the schema to incorporate more information in the db tables. I 
> would be extremely happy if use cases for different companies are considered 
> and the schema is enhanced to include specific columns in db schemas instead of 
> a column with a random JSON blob.

I’d be careful on nomenclature here. What you indicate with the blobs becomes 
an api change, and thus has all the warts previously mentioned.

If what you want is simply a way for a specific driver/plugin/vendor to store 
more data, that can be done with a driver/plugin/vendor specific db table, with 
an association id back to the general table. That can be done now, without any 
api impact or approval.
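
For instance, a hedged sketch of that pattern (every table, class and column
name below is made up for illustration; none of it is part of Neutron's
schema):

    # Hedged sketch of a driver-specific table with an association id back
    # to the general subnets table. Every name here is made up; this is
    # not part of Neutron's schema.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class MyVendorSubnetInfo(Base):
        __tablename__ = 'myvendor_ipam_subnet_info'  # hypothetical

        id = sa.Column(sa.String(36), primary_key=True)
        # Association back to the general subnets table.
        subnet_id = sa.Column(sa.String(36),
                              sa.ForeignKey('subnets.id',
                                            ondelete='CASCADE'),
                              nullable=False)
        rack_switch = sa.Column(sa.String(255))  # e.g. rack switch info
        backplane = sa.Column(sa.String(255))    # e.g. backplane info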

So are you saying you want to extend the db schema dynamically (allowed today), 
or you want to extend the public interface dynamically (not allowed, except in 
the case of entirely new api extensions)?

Thanks,
doug

> 
> Let's pick the subnets db table, for example. We have some use cases where it 
> would be great if the following information were associated with the subnet db table:
> 
> 1. Rack switch info
> 2. Backplane info
> 3. DHCP ip helpers
> 4. Option to tag allocation pools inside subnets
> 5. Multiple gateway addresses
> 
> We also want to store some information about the backplanes locally, so a 
> different table might be useful.
> 
> In a way, this information is not specific to our company. It's generic 
> information which ought to go with the subnets. Different companies can use 
> this information differently in their IPAM drivers. But the information 
> needs to be made available to justify the flexibility of ipam.
> 
> At Yahoo!, OpenStack is still not the source of truth for this kind of 
> information, and the database limitation is one of the reasons. I would prefer to 
> avoid having our own database to make sure that our use-cases are always 
> shared with the community.
> 
> 
> On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery wrote:
> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes wrote:
> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
> Hi Salvatore,
> 
> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
> make IPAM much more powerful. Some other projects already do things like
> this.
> 
> :( Actually, though "powerful", it also leads to implementation details 
> leaking directly out of the public REST API. I'm very negative on this and 
> would prefer an actual codified REST API that can be relied on regardless of 
> backend driver or implementation.
> 
> I agree with Jay here. We've had people propose similar things in Neutron 
> before, and I've been against them. The entire point of the Neutron REST API 
> is to not leak these details out. It dampens the strength of the logical 
> model, and it tends to have users become reliant on backend implementations.
>  
> 
> e.g. In Ironic, node has driver_info, which is JSON. it also has an
> 'extras' arbitrary JSON field. This allows us to put any information in
> there that we think is important for us.
> 
> Yeah, and this is a bad thing, IMHO. Public REST APIs should be structured, 
> not a Wild West free-for-all. The biggest problem with using free-form JSON 
> blobs in RESTful APIs like this is that you throw away the ability to evolve 
> the API in a structured, versioned way. Instead of evolving the API using 
> microversions, instead every vendor just jams whatever they feel like into 
> the JSON blob over time. There's no way for clients to know what the server 
> will return at any given time.
> 
> Achieving consensus on a REST API that meets the needs of a variety of 
> backend implementations is *hard work*, yes, but it's what we need to do if 
> we are to have APIs that are viewed in the industry as stable, discoverable, 
> and reliably useful.
> 
> ++, this is the correct way forward.
> 
> Thanks,
> Kyle
>  
> 
> Best,
> -jay
> 
> Best,
> -jay
> 
> Hoping to get some positive feedback from API and DB lieutenants too.
> 
> 
> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando wrote:
> 
> Arbitrary blobs are a powerful tools to circumvent limitations of an
> API, as well as other constraints which might be imposed for
> versioning or portability purposes.
> The parameters that should end up in such blob are typically
> specific for the 

Re: [openstack-dev] [Nova] SR-IOV subteam

2015-11-09 Thread Beliveau, Ludovic
Is there a meeting planned for this week?

Thanks,
/ludovic

On 11/03/2015 12:02 AM, Nikola Đipanov wrote:
> Hello Nova,
>
> Looking at Mitaka specs, but also during the Tokyo design summit
> sessions, we've seen several discussions and requests for enhancements
> to the Nova SR-IOV functionality.
>
> It has been brought up during the Summit that we may want to organize as
> a subteam to track all of the efforts better and make sure we get all
> the expert reviews on stuff more quickly.
>
> I have already added an entry on the subteams page [1] and on the
> reviews etherpad for Mitaka [2]. We may also want to have a meeting
> slot. As I am out for the week, I'll let others propose a time for it
> (that will hopefully work for all interested parties and their
> timezones) and we can take it from there next week.
>
> As always - comments and suggestions much appreciated.
>
> Many thanks,
> Nikola
>
> [1] https://wiki.openstack.org/wiki/Nova#Nova_subteams
> [2] https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
>


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-09 Thread Diana Clarke
On Mon, Nov 9, 2015 at 11:15 AM, Markus Zoeller  wrote:
> Regarding "non-bug" areas where you can contribute, there are also some
> areas which have explicitly called out for more help. I remember:
> * the API team [1] and
> * the notification/tasks area [2].
> Maybe it makes sense to reach out to them and ask how you can help there.
>
> [1] https://etherpad.openstack.org/p/mitaka-nova-api
> [2] https://etherpad.openstack.org/p/mitaka-nova-error-handling

Thanks for the new contributor tips, Markus!

Much appreciated,

--diana



[openstack-dev] [Ironic] Weekly Ironic QA/testing/3rd party CI meeting

2015-11-09 Thread John Villalovos
We have just started a weekly meeting to discuss Ironic QA, testing, and
3rd party CI.

More info here:
https://wiki.openstack.org/wiki/Meetings/Ironic-QA

Weekly meeting on Wednesdays at 1700 UTC on #openstack-meeting


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-09 Thread Tim Hinrichs
They shouldn't be combined because they can each be used without the
other.  That is, they each stand on their own.

Congress can be used for monitoring or delegating policy without attempting
to correct violations (i.e. without needing workflows).

Mistral can be used to make complex changes without writing a policy.

Tim





On Mon, Nov 9, 2015 at 8:57 AM Adam Young  wrote:

> On 11/09/2015 10:57 AM, Tim Hinrichs wrote:
>
> Congress happens to have the capability to run a script/API call under
> arbitrary conditions on the state of other OpenStack projects, which
> sounded like what you wanted.  Or did I misread your original question?
>
> Congress and Mistral are definitely not competing. Congress lets
> people declare which states of the other OpenStack projects are permitted
> using a general purpose policy language, but it does not try to make
> complex changes (often requiring a workflow) to eliminate prohibited
> states.  Mistral lets people create a workflow that makes complex changes
> to other OpenStack projects, but it doesn't have a general purpose policy
> language that describes which states are permitted.  Congress and Mistral
> are complementary, and each can stand on its own.
>
>
> And why should not these two things be in a single project?
>
>
>
>
> Tim
>
>
> On Mon, Nov 9, 2015 at 6:46 AM Adam Young  wrote:
>
>> On 11/06/2015 06:28 PM, Tim Hinrichs wrote:
>>
>> Congress allows users to write a policy that executes an action under
>> certain conditions.
>>
>> The conditions can be based on any data Congress has access to, which
>> includes nova servers, neutron networks, cinder storage, keystone users,
>> etc.  We also have some Ceilometer statistics; I'm not sure about whether
>> it's easy to get the Keystone notifications that you're talking about
>> today, but notifications are on our roadmap.  If the user's login is
>> reflected in the Keystone API, we may already be getting that event.
>>
>> The action could in theory be a mistral/heat API or an arbitrary script.
>> Right now we're set up to invoke any method on any of the python-clients
>> we've integrated with.  We've got an integration with heat but not
>> mistral.  New integrations are typically easy.
>>
>>
>> Sounds like Mistral and Congress are competing here, then.  Maybe we
>> should merge those efforts.
>>
>>
>>
>> Happy to talk more.
>>
>> Tim
>>
>>
>>
>> On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann 
>> wrote:
>>
>>> Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
>>> > On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann 
>>> wrote:
>>> >
>>> > > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
>>> > > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
>>> > > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
>>> > > > > > Can people help me work through the right set of tools for
>>> > > > > > this use case (has come up from several Operators) and map
>>> > > > > > out a plan to implement it:
>>> > > > > >
>>> > > > > > Large cloud with many users coming from multiple Federation
>>> > > > > > sources has a policy of providing a minimal setup for each
>>> > > > > > user upon first visit to the cloud:  Create a project for the
>>> > > > > > user with a minimal quota, and provide them a role assignment.
>>> > > > > >
>>> > > > > > Here are the gaps, as I see it:
>>> > > > > >
>>> > > > > > 1.  Keystone provides a notification that a user has logged
>>> > > > > > in, but there is nothing capable of executing on this
>>> > > > > > notification at the moment.  Only Ceilometer listens to
>>> > > > > > Keystone notifications.
>>> > > > > >
>>> > > > > > 2.  Keystone does not have a workflow engine, and should not be
>>> > > > > > auto-creating projects.  This is something that should be
>>> > > > > > performed via a Heat template, and Keystone does not know about
>>> > > > > > Heat, nor should it.
>>> > > > > >
>>> > > > > > 3.  The Mapping code is pretty static; it assumes a user entry
>>> > > > > > or a group entry in identity when creating a role assignment,
>>> > > > > > and neither will exist.
>>> > > > > >
>>> > > > > > We can assume a special domain for Federated users to have
>>> > > > > > per-user projects.
>>> > > > > >
>>> > > > > > So; lets assume a Heat Template that does the following:
>>> > > > > >
>>> > > > > > 1. Creates a user in the per-user-projects domain
>>> > > > > > 2. Assigns a role to the Federated user in that project
>>> > > > > > 3. Sets the minimal quota for the user
>>> > > > > > 4. Somehow notifies the user that the project has been set up.
>>> > > > > >
>>> > > > > > This last probably assumes an email address from the Federated
>>> > > > > > assertion.  Otherwise, the user hits Horizon, gets a "not
>>> > > > > > authenticated for any projects" error, and is stumped.
>>> > > > > >

Re: [openstack-dev] [kolla][tripleo] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-09 Thread Zane Bitter

On 04/11/15 16:26, Michal Rostecki wrote:

On 11/03/2015 10:27 PM, Zane Bitter wrote:

I think we all agree that using something _like_ Kubernetes would be
extremely interesting for controller services, where you have a bunch of
heterogeneous services with scheduling constraints (HA), that may need
to be scaled out at different rates,  

IMHO it's not interesting at all for compute nodes though, where the
scheduling is not only fixed but well-defined in advance. (It's... one
compute node per compute node. Duh.)

e.g. I could easily imagine a future containerised TripleO where the
controller services were deployed with Magnum but the compute nodes were
configured directly with Heat software deployments.

In such a scenario the fact that you can't use Kubernetes for compute
nodes diminishes its value not at all. So while I'm guessing net=host is
still a blocker (for Neutron services on the controller - although
another message in this thread suggests that K8s now supports it
anyway), I don't think pid=host needs to be since AFAICT it appears to
be required only for libvirt.
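
(For context on the two flags discussed here: net=host and pid=host just mean
the container shares the host's network and PID namespaces. A quick sketch
with the Docker SDK for Python, purely illustrative and with a made-up image
name:)

    # Sketch of what net=host / pid=host mean in plain Docker terms; the
    # image name is made up, and this uses the Docker SDK for Python
    # purely for illustration.
    import docker

    client = docker.from_env()
    client.containers.run('example/nova-libvirt',  # hypothetical image
                          network_mode='host',     # net=host
                          pid_mode='host',         # pid=host
                          detach=True)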

Something to think about...



One of the goals of Kolla (and idea of containerizing OpenStack services
in general) is to simplify upgrades. Scaling and scheduling are
obviously important points of Kolla, but they are not the only.

The model of upgrade where images of nova-compute, Neutron agents etc.
are built once, pushed to a registry and then pulled on compute nodes
looks much better to me than a traditional package upgrade. It may also
decrease the probability of breaking some common dependency during
upgrades.


I don't disagree with any of that, but I'm struggling to figure out how 
it has anything to do with what I wrote. Did you accidentally reply to 
the wrong message?


- ZB




Re: [openstack-dev] [Openstack-operators] Performance Team summit session results

2015-11-09 Thread Mark Wagner

For clarification, this is 3-4 PM (15:00 - 16:00) UTC, correct?

-mark

- Original Message -
> From: "Dina Belova" 
> To: "Matt Riedemann" 
> Cc: "OpenStack Development Mailing List" , 
> openstack-operat...@lists.openstack.org
> Sent: Monday, November 9, 2015 4:30:55 AM
> Subject: Re: [Openstack-operators] [openstack-dev] Performance Team summit 
> session results
> 
> Folks,
> 
> due to the doodle  3:00 -
> 4:00 UTC on Tuesdays (starting from tomorrow) is ok for all the people who voted.
> Although for the US folks in the PST time zone it'll be very early due to the
> time zone change that happened for the US on November 1st. Still hoping to see you
> there on the *#openstack-performance* channel :)
> 
> I've created primary wiki pages for the team
>  and its meetings
>  - please feel free
> to add more items to the agenda.
> 
> See you tomorrow :)
> 
> Cheers,
> Dina
> 
> 
> On Mon, Nov 9, 2015 at 5:38 PM, Dina Belova  wrote:
> 
> > Matt,
> >
> > thank you so much for covering [1], [2] and [3] points - I'll ping folks
> > who've written these lines directly and will try to find out the answers.
> >
> > Cheers,
> > Dina
> >
> > On Fri, Oct 30, 2015 at 1:42 AM, Matt Riedemann <
> > mrie...@linux.vnet.ibm.com> wrote:
> >
> >>
> >>
> >> On 10/29/2015 10:55 AM, Matt Riedemann wrote:
> >>
> >>>
> >>>
> >>> On 10/29/2015 9:30 AM, Dina Belova wrote:
> >>>
>  Hey folks!
> 
>  On Tuesday we had a great summit session about the performance team kick-off,
>  and yesterday there was a great LDT session as well, and I’m really glad to
>  see how important the OpenStack performance topic is for all
>  of us. A 40-minute session surely was not enough to analyse everyone’s
>  feedback and the bottlenecks people usually see, so I’ll try to finalise
>  what has been discussed and the next steps in this email.
> 
>  Performance team kick-off session
>  (https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off)
> 
>  can be briefly described with the following points:
> 
>    * IBM, Intel, HP, Mirantis, Rackspace, Red Hat, Yahoo! and others were
>  taking part in the session
>    * Various tools are used right now for OpenStack benchmarking and
>  profiling:
>    o Rally (IBM, HP, Mirantis, Yahoo!)
>    o Shaker (Mirantis, merging its functionality to Rally right now)
>    o Gatling (Rackspace)
>    o Zipkin (Yahoo!)
>    o JMeter (Yandex)
>    o and others…
>    * Various issues have been seen during OpenStack cloud operation
>  (full list can be found here -
>  https://etherpad.openstack.org/p/openstack-performance-issues). Most
>  mentioned issues were the following:
>    o performance of DB-related layers (DB itself and oslo.db) - it is
>  about 7 abstraction DB layers in Nova; performance of Nova
>  conductor was mentioned several times
>    o performance of MQ-related layers (MQ itself and oslo.messaging)
>    * Different companies are using different standards for performance
>  benchmarking (both control plane and data plane testing)
>    * The most wished-for outputs from the team, judging by the comments, will be:
>    o agree on the “performance testing standard”, including answers
>  on the following questions:
>    + what tools need to be used for OpenStack performance
>  benchmarking?
>    + what benchmarking meters need to be covered? what would we
>  like to compare?
>    + what scenarios need to be covered?
>    + how can we compare performance of different cloud
>  deployments?
>    + what performance deployment patterns can be used for various
>  workloads?
>    o share test plans and perform benchmarking tests
>    o create methodologies and documentation about best OpenStack
>  deployment and performance testing practices
> 
> 
>  We’re going to cover all these topics further. First of all, an IRC channel
>  for the discussions was created: *#openstack-performance*. We’re going
>  to have a weekly meeting related to current progress on that channel;
>  the doodle with the voting can be found here:
>  http://doodle.com/poll/wv6qt8eqtc3mdkuz#table
>    (I was brave enough not to include timeslots that were overlapping
>  with some of mine really hard-to-move activities :))
> 
>  Let’s have next week as a voting time, and have the first IRC meeting in our
>  channel the week after next. We can start our further discussions 

[openstack-dev] [chef] Refactoring and cleanup of all cookbooks during the Mitaka cycle

2015-11-09 Thread Jan Klare
Hi everyone,

we have just merged the last patches to branch our stable/liberty release for 
all our openstack-chef cookbooks (starting right now). During the liberty cycle 
and at the mitaka design summit we discussed our goals for the current cycle 
and our next big steps. Since we identified the refactoring and cleanup as one 
of our main goals during this cycle, we are now getting it started. The biggest 
part of this process is written down in this spec:
http://specs.openstack.org/openstack/openstack-chef-specs/specs/liberty/all/refactor_config_templates.html
 

In addition to that, we are going to remove all of the unmaintained code from 
all cookbooks. Right now this means we are finally going to drop Suse support, 
since there was no contribution during the last 2 cycles to maintain the 
support and we agreed to not keep unmaintained code during the refactoring and 
cleanup process (if there is anybody out there who wants to keep this code, 
please step up and maintain it, we are happy to support you). Most of the other 
specific attributes (e.g. for vmware, ceph or docker) support will be moved 
from the basic service cookbooks to more specialised and opinionated wrapper 
cookbooks or recipes. Although we will drop a lot of code and feature support 
(switch case scenarios) initially in our basic service cookbooks, it should be 
very easy to add this support in wrapper cookbooks after that. We think that 
moving to a modular cookbook style with generic defaults is the best way of 
keeping the cookbooks maintain- and wrapable. Since these patches need to be 
finished in all cookbooks before they will work again, we agreed on the 
following steps for the patch process:
1) remove all unneeded/default attributes from the attributes file and template
2) move all specific attributes (e.g. vmware, ceph or docker) either to 
documentation or a specific recipe
3) cleanup the rest of the attributes by replacing the template with the new 
template logic (refactor_config_templates spec) and adapt the recipe where the 
template resource is called
4) adapt the specs (unit tests) to work again
5) wait for the other cookbooks to get finished with the same steps to make 
proper integration testing possible again
(see also
http://eavesdrop.openstack.org/meetings/openstack_chef/2015/openstack_chef.2015-11-09-16.01.log.html)
As soon as we hit step 5 with all cookbooks, we will make additional small 
changes to get our integration testing working again and merge all of the 
changes as soon as this happens. This means that the biggest part of the 
cleanup and refactoring will happen in one commit to achieve a working set of 
cookbooks for integration testing (one commit but at least 5 patchsets to make 
it a little more reviewable). After this big patch, we will continue the 
refactoring process in the usual small steps before releasing a stable branch 
again. The current target for the next release is refactored_stable/liberty 
working at least on ubuntu14.04 and centos7. We will not include support for 
distributions without at least one active contributor maintaining it after our 
initial cleanup step (might be added lateron). If you think there should be 
support for more distributions initially, please step up and we will try to 
support you as well as possible.

Cheers,
Jan 


irc: jklare
(openstack-chef PTL)


Re: [openstack-dev] [kolla][tripleo] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-09 Thread Steven Dake (stdake)


On 11/3/15, 2:27 PM, "Zane Bitter"  wrote:

>On 02/11/15 18:33, Steven Dake (stdake) wrote:
>>
>> Blame the core team :)  I suspect you will end up retrying a lot of
>> patterns we tried and failed with Kubernetes.  Kubernetes eventually was
>> found to be non-viable by the delivery of this 2 week project:
>>
>> https://github.com/sdake/compute-upgrade
>>
>> Documented in this blog:
>>
>> 
>>http://sdake.io/2015/01/28/an-atomic-upgrade-process-for-openstack-comput
>>e-nodes/
>
>I don't recognise half of the names of tools y'all have been talking
>about here, but I can't help wondering whether the assumption that
>exactly one of these tools has to do all of the things has gone
>unchallenged.
>
>I think we all agree that using something _like_ Kubernetes would be
>extremely interesting for controller services, where you have a bunch of
>heterogeneous services with scheduling constraints (HA), that may need
>to be scaled out at different rates,  
>
>IMHO it's not interesting at all for compute nodes though, where the
>scheduling is not only fixed but well-defined in advance. (It's... one
>compute node per compute node. Duh.)
>
>e.g. I could easily imagine a future containerised TripleO where the
>controller services were deployed with Magnum but the compute nodes were
>configured directly with Heat software deployments.
>
>In such a scenario the fact that you can't use Kubernetes for compute
>nodes diminishes its value not at all. So while I'm guessing net=host is
>still a blocker (for Neutron services on the controller - although
>another message in this thread suggests that K8s now supports it
>anyway), I don't think pid=host needs to be since AFAICT it appears to
>be required only for libvirt.
>
>Something to think about...
>
>cheers,
>Zane.
>

Zane,

Not necessarily disagreeing with you that running the controller nodes
under a high-powered orchestration system and the compute nodes under some
other simple orchestration system (such as bash:) might be simpler, I
think what people do desire is one tool to rule them all (in this case
Mesos).  Running with Ansible does offer a tidy upgrade execution
environment for compute nodes.  This could be done without ansible/mesos
on those nodes, but it would be a bespoke solution specific to the problem
at hand.

Regards
-steve

>




Re: [openstack-dev] [stable][infra][neutron] ZUUL_BRANCH not set for periodic stable jobs

2015-11-09 Thread Jeremy Stanley
On 2015-11-09 17:31:00 +0100 (+0100), Ihar Hrachyshka wrote:
[...]
> From the failure log, I determined that the tests fail because they assume
> neutron/liberty code, but actually run against neutron/master (that does not
> have that neutron.plugins.embrane.* namespace because the plugin was removed
> in Mitaka).
> 
> I then compared how we fetch neutron in gate and in periodic jobs, and I see
> that ZUUL branch is not set in the latter jobs.
[...]

The short answer is that the periodic trigger in Zuul is changeless and
thus branchless. It just wakes up at the specified time and starts a
list of jobs associated with that pipeline for any projects. This is
why the working periodic jobs have different names than their gerrit
triggered pipeline equivalents... they need to hard-code a branch
(usually as a JJB parameter).
-- 
Jeremy Stanley




Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-09 Thread Carl Baldwin
On Fri, Nov 6, 2015 at 2:59 PM, Shraddha Pandhe  wrote:
>
> We have a similar requirement where we want to pick a network thats
> accessible in the rack that VM belongs to. We have L3 Top-of-rack, so the
> network is confined to the rack. Right now, we are achieving this by naming
> physical network name in a certain way, but thats not going to scale.
>
> We also want to be able to make scheduling decisions based on IP
> availability. So we need to know the rack <-> network mapping.  We can't
> embed all factors in a name. It will be impossible to make scheduling
> decisions by parsing name and comparing. GoDaddy has also been doing
> something similar [1], [2].
>

This is precisely the use case that the large deployers team (LDT) has
brought to Neutron [1].  In fact, GoDaddy has been at the forefront of that
request.  We've had discussions about this since just after Vancouver on
the ML.  I've put up several specs to address it [2] and I'm working
another revision of it.  My take on it is that Neutron needs a model for a
layer 3 network (IpNetwork) which would group the rack networks.  The
IpNetwork would be visible to the end user and there will be a network <->
host mapping.  I am still aiming to have working code for this in Mitaka.
I discussed this with the LDT in Tokyo and they seemed to agree.  We had a
session on this in the Neutron design track [3][4] though that discussion
didn't produce anything actionable.

Solving this problem at the IPAM level has come up in discussion but I
don't have any references for that.  It is something that I'm still
considering but I haven't worked out all of the details for how this can
work in a portable way.  Could you describe how you imagine this flow
would work from a user's perspective?  Specifically, when a user wants to
boot a VM, what precise API calls would be made to achieve this on your
network, and where would the IPAM data come into play?

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1458890
[2] https://review.openstack.org/#/c/225384/
[3] https://etherpad.openstack.org/p/mitaka-neutron-next-network-model
[4] https://www.openstack.org/summit/tokyo-2015/schedule/design-summit


Re: [openstack-dev] [Openstack-operators] Performance Team summit session results

2015-11-09 Thread Dina Belova
Mark,

Yes, sorry for not mentioning it here. It's 3 PM - 4 PM UTC.

Cheers,
Dina

On Tue, Nov 10, 2015 at 2:51 AM, Mark Wagner  wrote:

>
> For clarification, this is 3-4 PM (15:00 - 16:00) UTC, correct ?.
>
> -mark
>
> - Original Message -
> > From: "Dina Belova" 
> > To: "Matt Riedemann" 
> > Cc: "OpenStack Development Mailing List" <
> openstack-dev@lists.openstack.org>,
> openstack-operat...@lists.openstack.org
> > Sent: Monday, November 9, 2015 4:30:55 AM
> > Subject: Re: [Openstack-operators] [openstack-dev] Performance Team
> summit session results
> >
> > Folks,
> >
> > due to the doodle  3:00 -
> > 4:00 UTC on Tuesdays (starting from tomorrow) is ok for all the people who
> > voted. Although for the US folks in the PST time zone it'll be very early
> > due to the time zone change that happened for the US on November 1st.
> > Still hoping to see you there on the *#openstack-performance* channel :)
> >
> > I've created primary wiki pages for the team
> >  and its meetings
> >  - please feel
> free
> > to add more items to the agenda.
> >
> > See you tomorrow :)
> >
> > Cheers,
> > Dina
> >
> >
> > On Mon, Nov 9, 2015 at 5:38 PM, Dina Belova wrote:
> >
> > > Matt,
> > >
> > > thank you so much for covering [1], [2] and [3] points - I'll ping
> folks
> > > who've written these lines directly and will try to find out the
> answers.
> > >
> > > Cheers,
> > > Dina
> > >
> > > On Fri, Oct 30, 2015 at 1:42 AM, Matt Riedemann <
> > > mrie...@linux.vnet.ibm.com> wrote:
> > >
> > >>
> > >>
> > >> On 10/29/2015 10:55 AM, Matt Riedemann wrote:
> > >>
> > >>>
> > >>>
> > >>> On 10/29/2015 9:30 AM, Dina Belova wrote:
> > >>>
> >  Hey folks!
> > 
> >  On Tuesday we had a great summit session about the performance team
> >  kick-off, and yesterday there was a great LDT session as well, and I’m
> >  really glad to see how important the OpenStack performance topic is for
> >  all of us. A 40-minute session surely was not enough to analyse everyone’s
> >  feedback and the bottlenecks people usually see, so I’ll try to finalise
> >  what has been discussed and the next steps in this email.
> > 
> >  Performance team kick-off session
> >  (https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off)
> > 
> >  can be briefly described with the following points:
> > 
> >    * IBM, Intel, HP, Mirantis, Rackspace, Red Hat, Yahoo! and others
> were
> >  taking part in the session
> >    * Various tools are used right now for OpenStack benchmarking and
> >  profiling:
> >    o Rally (IBM, HP, Mirantis, Yahoo!)
> >    o Shaker (Mirantis, merging its functionality to Rally right
> now)
> >    o Gatling (Rackspace)
> >    o Zipkin (Yahoo!)
> >    o JMeter (Yandex)
> >    o and others…
> >    * Various issues have been seen while operating OpenStack clouds
> >  (full list can be found here -
> >  https://etherpad.openstack.org/p/openstack-performance-issues).
> >  Most
> >  mentioned issues were the following:
> >    o performance of DB-related layers (DB itself and oslo.db) - there
> >  are about 7 DB abstraction layers in Nova; the performance of Nova
> >  conductor was mentioned several times
> >    o performance of MQ-related layers (MQ itself and
> oslo.messaging)
> >    * Different companies are using different standards for
> performance
> >  benchmarking (both control plane and data plane testing)
> >    * Based on the comments, the most wished-for outputs from the
> >  team will be:
> >    o agree on the “performance testing standard”, including
> answers
> >  on the following questions:
> >    + what tools need to be used for OpenStack performance
> >  benchmarking?
> >    + what benchmarking metrics need to be covered? what
> >  would we like to compare?
> >    + what scenarios need to be covered?
> >    + how can we compare performance of different cloud
> >  deployments?
> >    + what performance deployment patterns can be used for
> various
> >  workloads?
> >    o share test plans and perform benchmarking tests
> >    o create methodologies and documentation about best OpenStack
> >  deployment and performance testing practices
> > 
> > 
> >  We’re going to cover all these topics further. First of all, an IRC
> >  channel for the discussions was created: *#openstack-performance*.
> >  We’re going to have a weekly meeting on current progress 

Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-09 Thread Richard Raseley
From this operator’s perspective this is exactly the element of community 
culture that, by encouraging the proliferation of projects and tools, is making 
the OpenStack landscape more complex and less {user,operator,architect,business 
decision maker} friendly.

In my opinion, it is essentially a manufactured and completely unnecessary 
distinction. I look forward to the day when, through some yet-to-be-known 
mechanism, we have a more focused product perspective within the community.

> On Nov 9, 2015, at 10:11 AM, Tim Hinrichs  wrote:
> 
> They shouldn't be combined because they can each be used without the other.  
> That is, they each stand on their own.
> 
> Congress can be used for monitoring or delegating policy without attempting 
> to correct violations (i.e. without needing workflows).
> 
> Mistral can be used to make complex changes without writing a policy.
> 
> Tim
> 
> 
> 
> 
> 
> On Mon, Nov 9, 2015 at 8:57 AM Adam Young  wrote:
> On 11/09/2015 10:57 AM, Tim Hinrichs wrote:
>> Congress happens to have the capability to run a script/API call under 
>> arbitrary conditions on the state of other OpenStack projects, which sounded 
>> like what you wanted.  Or did I misread your original question?
>> 
>> Congress and Mistral are definitely not competing.  Congress lets people 
>> declare which states of the other OpenStack projects are permitted using a 
>> general purpose policy language, but it does not try to make complex changes 
>> (often requiring a workflow) to eliminate prohibited states.  Mistral lets 
>> people create a workflow that makes complex changes to other OpenStack 
>> projects, but it doesn't have a general purpose policy language that 
>> describes which states are permitted.  Congress and Mistral are 
>> complementary, and each can stand on its own.
> 
> And why should not these two things be in a single project?
> 
> 
> 
>> 
>> Tim
>> 
>> 
>> On Mon, Nov 9, 2015 at 6:46 AM Adam Young  wrote:
>> On 11/06/2015 06:28 PM, Tim Hinrichs wrote:
>>> Congress allows users to write a policy that executes an action under 
>>> certain conditions.
>>> 
>>> The conditions can be based on any data Congress has access to, which 
>>> includes nova servers, neutron networks, cinder storage, keystone users, 
>>> etc.  We also have some Ceilometer statistics; I'm not sure about whether 
>>> it's easy to get the Keystone notifications that you're talking about 
>>> today, but notifications are on our roadmap.  If the user's login is 
>>> reflected in the Keystone API, we may already be getting that event.
>>> 
>>> The action could in theory be a mistral/heat API or an arbitrary script.  
>>> Right now we're set up to invoke any method on any of the python-clients 
>>> we've integrated with.  We've got an integration with heat but not mistral. 
>>>  New integrations are typically easy.
>> 
>> Sounds like Mistral and Congress are competing here, then.  Maybe we should 
>> merge those efforts.
>> 
>> 
>>> 
>>> Happy to talk more.
>>> 
>>> Tim
>>> 
>>> 
>>> 
>>> On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann  wrote:
>>> Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
>>> > On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann  
>>> > wrote:
>>> >
>>> > > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
>>> > > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
>>> > > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
>>> > > > > > Can people help me work through the right set of tools for this 
>>> > > > > > use
>>> > > case
>>> > > > > > (has come up from several Operators) and map out a plan to 
>>> > > > > > implement
>>> > > it:
>>> > > > > >
>>> > > > > > Large cloud with many users coming from multiple Federation 
>>> > > > > > sources
>>> > > has
>>> > > > > > a policy of providing a minimal setup for each user upon first 
>>> > > > > > visit
>>> > > to
>>> > > > > > the cloud:  Create a project for the user with a minimal quota, 
>>> > > > > > and
>>> > > > > > provide them a role assignment.
>>> > > > > >
>>> > > > > > Here are the gaps, as I see it:
>>> > > > > >
>>> > > > > > 1.  Keystone provides a notification that a user has logged in, 
>>> > > > > > but
>>> > > > > > there is nothing capable of executing on this notification at the
>>> > > > > > moment.  Only Ceilometer listens to Keystone notifications.
>>> > > > > >
>>> > > > > > 2.  Keystone does not have a workflow engine, and should not be
>>> > > > > > auto-creating projects.  This is something that should be 
>>> > > > > > performed
>>> > > via
>>> > > > > > a Heat template, and Keystone does not know about Heat, nor should
>>> > > it.
>>> > > > > >
>>> > > > > > 3.  The Mapping code is pretty static; it assumes a user entry or 
>>> > > > > > a
>>> > > > > > group entry in identity when creating a role assignment, and 
>>> > > > > > neither
>>> > > > > > 

Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-09 Thread Zhenyu Zheng
Hi, thanks all for replying; sorry, I might have been a bit unclear.

We have user demand to change only the root device of a
volume-backed instance for upper-layer services. It's not cloudy, but it is
quite common. And changing the OS is another demand that is sort of related
to this.

Cinder supports live backup of a volume, but does not support live restore
of a volume.
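
For reference, here is a minimal sketch of the Liberty-era live-backup call
via python-cinderclient (a hedged illustration: the placeholder credentials
and the restores manager name are assumptions from my reading of the v2
client; see the --force option Duncan mentions below):

    from cinderclient.v2 import client as cinder_client

    # Placeholder credentials -- assumption: use your own auth details.
    cinder = cinder_client.Client('admin', 'secret', 'demo',
                                  'http://keystone:5000/v2.0')

    volume_id = 'VOLUME-UUID'          # the attached root volume
    new_volume_id = 'NEW-VOLUME-UUID'  # an "available" target volume

    # As of Liberty, force=True backs up an in-use (attached) volume via an
    # internal snapshot, so the result should be crash consistent.
    backup = cinder.backups.create(volume_id, name='root-disk-backup',
                                   force=True)

    # The missing half: restore still requires an "available" target volume,
    # so there is no live restore of an attached root device today.
    cinder.restores.restore(backup.id, volume_id=new_volume_id)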

Are we planning to support this kind of action?

Yours,
Zheng

On Mon, Nov 9, 2015 at 8:24 PM, Duncan Thomas 
wrote:

> On 9 November 2015 at 09:04, Zhenyu Zheng 
> wrote:
>
>>  And the Nova side also doesn't support detaching the root device, which
>> means we cannot perform volume backup/restore from the cinder side, because
>> those actions need the volume in "available" status.
>>
>>
>
> It might be of interest to note that volume snapshots have always worked
> on attached volumes, and as of liberty, the backup operation now supports a
> --force=True option that does a backup of a live volume (via an internal
> snapshot, so it should be crash consistent)
>
>
> --
> --
> Duncan Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-09 Thread Markus Zoeller
Kashyap Chamarthy  wrote on 11/06/2015 06:37:08 PM:

> From: Kashyap Chamarthy 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/06/2015 06:37 PM
> Subject: Re: [openstack-dev] [nova][bugs] Weekly Status Report
> 
> On Fri, Nov 06, 2015 at 05:54:59PM +0100, Markus Zoeller wrote:
> > Hey folks,
> > 
> > below is the first report of bug stats I intend to post weekly.
> > We discussed briefly during the Mitaka summit that this report
> > could be useful to keep attention on the open bugs at a certain
> > level. Let me know if you think it's missing something.
> 
> Nice.  Thanks for this super useful report (especially the queries)!
> 
> For cadence, I feel a week flies by too quickly, which is likely to
> cause people to train their muscle memory to mark these emails as read.
> Maybe bi-weekly?
> -- 
> /kashyap

How about having it weekly until the Mitaka-2 milestone in January
and seeing how that works? If it is perceived as more noise on the ML,
then I can change it to bi-weekly.

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How can I contribute to the scheduler codebase?

2015-11-09 Thread Sylvain Bauza

Hi,

During the last Nova scheduler meeting (held every Monday at 1400 UTC on 
#openstack-meeting-alt), we identified some ongoing efforts that could 
possibly be addressed by anyone wanting to step in. For the moment, we 
are still polishing the last bits of agreement, but those blueprints 
should be split into small actionable items that could be seen as 
low-hanging fruit.


Given that those tasks require a bit of context, the best way to get 
involved is to join the Nova scheduler weekly meeting (see the timing 
above) and join our team. We'll try to provide you with a bit of 
guidance and explanation whenever needed so that you can get some 
work assigned to you.


From an overall point of view, there are still many other ways to begin 
your Nova journey; see 
https://wiki.openstack.org/wiki/Nova/Mentoring#What_should_I_work_on.3F


HTH,
-Sylvain


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [tc][all][osprofiler] OSprofiler is dead, long live OSprofiler

2015-11-09 Thread Joshua Harlow
+1 from me (although I've already contributed to osprofiler so my vote
might not count, ha). Anyway, people can poke me if they have
any questions about osprofiler and Boris isn't around; I'm happy to
answer any questions as well...

Thanks, Boris, for getting this rolling again...

Another question: do we need to talk with the people doing the
request_id integration into clients to make sure that the trace_id (or
whatever it's called) is returned to clients...

I believe one of the original complaints/questions was that
with osprofiler there are now 2 trace-like headers
(https://github.com/openstack/osprofiler/blob/master/osprofiler/web.py#L36)
and such being sent around; should we nail that down now and put that
complaint/question to bed?
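
For anyone who hasn't looked at those headers, here is a minimal sketch of
how a trace is started and how the extra headers get produced (based on my
reading of osprofiler's API; treat the exact names as assumptions):

    from osprofiler import profiler
    from osprofiler import web

    # Assumption: the HMAC key must match what the target services were
    # configured with; osprofiler signs trace info so that only trusted
    # callers can turn profiling on.
    profiler.init("SECRET_KEY")

    # Each instrumented span is recorded against the current trace context.
    with profiler.Trace("db", info={"db.statement": "SELECT 1"}):
        pass  # the actual work goes here

    # The extra HTTP headers (X-Trace-Info / X-Trace-HMAC) attached to
    # outgoing REST calls -- the "2 trace-like headers" mentioned above.
    print(web.get_trace_id_headers())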

-Josh

On Mon, Nov 9, 2015, at 02:57 AM, Boris Pavlovic wrote:
> Hi stackers,
> 
> Intro
> ---
> 
> It's not a big secret that OpenStack is huge and complicated ecosystem of 
> different
> services that are working together to implement OpenStack API. 
> 
> For example booting VM is going through many projects and services: nova-api, 
> nova-scheduler, nova-compute, glance-api, glance-registry, keystone, 
> cinder-api, neutron-api... and many others. 
> 
> The question is how to understand what part of the request takes the most of 
> the time and should be improved. It's especially interested to get such data 
> under the load. 
> 
> To make it simple, I wrote OSProfiler which is tiny library that should be 
> added to all OpenStack 
> projects to create cross project/service tracer/profiler. 
> 
> Demo (trace of CLI command: nova boot) can be found here: 
> http://boris-42.github.io/ngk.html
> 
> This library is very simple. For those who wants to know how it works and how 
> it's integrated with OpenStack take a look here: 
> https://github.com/openstack/osprofiler/blob/master/README.rst
> 
> What is the current status? 
> ---
> 
> Good news: 
> - OSprofiler is mostly done 
> - OSprofiler is integrated with Cinder, Glance, Trove & Ceilometer 
> 
> Bad news: 
> - OSprofiler is not integrated in a lot of important projects: Keystone, 
> Nova, Neutron 
> - OSprofiler can use only Ceilometer + oslo.messaging as a backend 
> - OSprofiler stores part of arguments in api-paste.ini part in project.conf 
> which is terrible thing
> - There is no DSVM job that check that changes in OSprofiler don't break the 
> projects that are using it 
> - It's hard to enable OSprofiler in DevStack
> 
> Good news: 
> I spend some time and made 4 specs that should address most of issues: 
> https://github.com/openstack/osprofiler/tree/master/doc/specs
> 
> Let's make it happen in Mitaka!
> 
> Thoughts?
> By the way somebody would like to join this effort?) 
> 
> Best regards,
> Boris Pavlovic 
> 
> 
> _
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-09 Thread Tim Hinrichs
Congress happens to have the capability to run a script/API call under
arbitrary conditions on the state of other OpenStack projects, which
sounded like what you wanted.  Or did I misread your original question?

Congress and Mistral are definitely not competing.  Congress lets people
declare which states of the other OpenStack projects are permitted using a
general purpose policy language, but it does not try to make complex
changes (often requiring a workflow) to eliminate prohibited states.
Mistral lets people create a workflow that makes complex changes to other
OpenStack projects, but it doesn't have a general purpose policy language
that describes which states are permitted.  Congress and Mistral are
complementary, and each can stand on its own.
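
To make the division of labor concrete, here is a hypothetical sketch of the
"action" side handing remediation over to a Mistral workflow (the endpoint,
token, workflow name, and inputs are all invented; the client call follows
python-mistralclient's executions API as I understand it):

    from mistralclient.api import client as mistral_client

    # Assumption: the token comes from an existing keystone session.
    mistral = mistral_client.client(mistral_url='http://mistral:8989/v2',
                                    auth_token='KEYSTONE-TOKEN')

    # A policy engine decides *that* a state is prohibited; the workflow
    # owns *how* to fix it. "fix_quota_violation" is a made-up name.
    mistral.executions.create('fix_quota_violation',
                              workflow_input={'project_id': 'PROJECT-UUID'})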

Tim


On Mon, Nov 9, 2015 at 6:46 AM Adam Young  wrote:

> On 11/06/2015 06:28 PM, Tim Hinrichs wrote:
>
> Congress allows users to write a policy that executes an action under
> certain conditions.
>
> The conditions can be based on any data Congress has access to, which
> includes nova servers, neutron networks, cinder storage, keystone users,
> etc.  We also have some Ceilometer statistics; I'm not sure about whether
> it's easy to get the Keystone notifications that you're talking about
> today, but notifications are on our roadmap.  If the user's login is
> reflected in the Keystone API, we may already be getting that event.
>
> The action could in theory be a mistral/heat API or an arbitrary script.
> Right now we're set up to invoke any method on any of the python-clients
> we've integrated with.  We've got an integration with heat but not
> mistral.  New integrations are typically easy.
>
>
> Sounds like Mistral and Congress are competing here, then.  Maybe we
> should merge those efforts.
>
>
>
> Happy to talk more.
>
> Tim
>
>
>
> On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann 
> wrote:
>
>> Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
>> > On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann 
>> wrote:
>> >
>> > > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
>> > > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
>> > > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
>> > > > > > Can people help me work through the right set of tools for this
>> use
>> > > case
>> > > > > > (has come up from several Operators) and map out a plan to
>> implement
>> > > it:
>> > > > > >
>> > > > > > Large cloud with many users coming from multiple Federation
>> sources
>> > > has
>> > > > > > a policy of providing a minimal setup for each user upon first
>> visit
>> > > to
>> > > > > > the cloud:  Create a project for the user with a minimal quota,
>> and
>> > > > > > provide them a role assignment.
>> > > > > >
>> > > > > > Here are the gaps, as I see it:
>> > > > > >
>> > > > > > 1.  Keystone provides a notification that a user has logged in,
>> but
>> > > > > > there is nothing capable of executing on this notification at
>> the
>> > > > > > moment.  Only Ceilometer listens to Keystone notifications.
>> > > > > >
>> > > > > > 2.  Keystone does not have a workflow engine, and should not be
>> > > > > > auto-creating projects.  This is something that should be
>> performed
>> > > via
>> > > > > > a Heat template, and Keystone does not know about Heat, nor
>> should
>> > > it.
>> > > > > >
>> > > > > > 3.  The Mapping code is pretty static; it assumes a user entry
>> or a
>> > > > > > group entry in identity when creating a role assignment, and
>> neither
>> > > > > > will exist.
>> > > > > >
>> > > > > > We can assume a special domain for Federated users to have
>> per-user
>> > > > > > projects.
>> > > > > >
>> > > > > > So; lets assume a Heat Template that does the following:
>> > > > > >
>> > > > > > 1. Creates a user in the per-user-projects domain
>> > > > > > 2. Assigns a role to the Federated user in that project
>> > > > > > 3. Sets the minimal quota for the user
>> > > > > > 4. Somehow notifies the user that the project has been set up.
>> > > > > >
>> > > > > > This last probably assumes an email address from the Federated
>> > > > > > assertion.  Otherwise, the user hits Horizon, gets a "not
>> > > authenticated
>> > > > > > for any projects" error, and is stumped.
>> > > > > >
>> > > > > > How is quota assignment done in the other projects now?  What
>> happens
>> > > > > > when a project is created in Keystone?  Does that information
>> gets
>> > > > > > transferred to the other services, and, if so, how?  Do most
>> people
>> > > use
>> > > > > > a custom provisioning tool for this workflow?
>> > > > > >
>> > > > >
>> > > > > I know at Dreamhost we built some custom integration that was
>> triggered
>> > > > > when someone turned on the Dreamcompute service in their account
>> in our
>> > > > > existing user management system. That integration created the
>> account
>> > > in
>> > > > > keystone, set up a 

Re: [openstack-dev] [Heat] Admin operations on all tenants

2015-11-09 Thread Steven Hardy
On Mon, Nov 09, 2015 at 11:00:10AM +, Bruno Bompastor wrote:
> Hello,
> 
> I was looking to enable admin operations for heat stacks on all tenants. This 
> is useful to do support operations and debug stacks owned by different users.
> 
> I came across the “heat stack-list -g” command that allows one to see all 
> stacks, but after that it is not possible to “heat stack-show” or “heat 
> template-show” based on ID (even if you allow that operation for admins in 
> the policy.json).
> 
> Does anyone have a solution for this? Is it even possible?

Currently it's not possible: stack-list -g is the only "global" API
supported, and it is disabled by default in the policy.json.
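
For reference, a minimal sketch of the one global operation that does exist,
assuming python-heatclient's v1 API (the global_tenant kwarg and the
stacks:global_index policy rule are my understanding of how -g is wired up):

    from heatclient import client as heat_client

    # Assumption: endpoint and token come from an existing keystone session.
    heat = heat_client.Client('1',
                              endpoint='http://heat-api:8004/v1/TENANT_ID',
                              token='KEYSTONE-TOKEN')

    # Maps to "heat stack-list -g"; needs stacks:global_index enabled
    # in policy.json for your role.
    for stack in heat.stacks.list(global_tenant=True):
        print(stack.id, stack.stack_name, stack.stack_status)

    # There is no cross-tenant stack-show: stacks.get() on another
    # tenant's stack ID fails, as described above.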

This was discussed in the operator feedback session at summit, ref this
bug:

https://bugs.launchpad.net/heat/+bug/1466694

It sounds like there is a desire to see a general solution to this, but so
far we've resisted embracing the "global admin" concept, because it seemed
like several other projects made assumptions relating to the scope of the
admin role, ref https://bugs.launchpad.net/keystone/+bug/968696

I'll triage the heat bug mentioned above and we can continue discussion
there.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-09 Thread Kashyap Chamarthy
On Mon, Nov 09, 2015 at 04:55:12PM +0100, Markus Zoeller wrote:
> Kashyap Chamarthy  wrote on 11/06/2015 06:37:08 PM:
> 
> > From: Kashyap Chamarthy 
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Date: 11/06/2015 06:37 PM
> > Subject: Re: [openstack-dev] [nova][bugs] Weekly Status Report
> > 
> > On Fri, Nov 06, 2015 at 05:54:59PM +0100, Markus Zoeller wrote:
> > > Hey folks,
> > > 
> > > below is the first report of bug stats I intend to post weekly.
> > > We discussed briefly during the Mitaka summit that this report
> > > could be useful to keep attention on the open bugs at a certain
> > > level. Let me know if you think it's missing something.
> > 
> > Nice.  Thanks for this super useful report (especially the queries)!
> > 
> > For cadence, I feel a week flies by too quickly, which is likely to
> > cause people to train their muscle memory to mark these emails as read.
> > Maybe bi-weekly?
> 
> How about having it weekly until the Mitaka-2 milestone in January
> and seeing how that works? 

Sure, sounds good.

> If it is perceived as more noise on the ML then I can change it to
> bi-weekly.



-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes - 11/09/2015

2015-11-09 Thread Renat Akhmerov
Thanks for joining us today!

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-11-09-16.17.html
 

Meeting log: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-11-09-16.17.log.html
 


The full meeting archive can be found by the link at 
https://wiki.openstack.org/wiki/Meetings/MistralAgenda 
 (see bottom section 
“Previous meetings”).

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-09 Thread Markus Zoeller
Diana Clarke  wrote on 11/06/2015 09:54:03 
PM:

> From: Diana Clarke 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/06/2015 09:54 PM
> Subject: Re: [openstack-dev] [nova][bugs] Weekly Status Report
> 
> On Fri, Nov 6, 2015 at 11:54 AM, Markus Zoeller  
wrote:
> > below is the first report of bug stats I intend to post weekly.
> > We discussed briefly during the Mitaka summit that this report
> > could be useful to keep attention on the open bugs at a certain
> > level. Let me know if you think it's missing something.
> 
> Thanks Markus!
> 
> On the topic of triaging bugs, I'd love to see more of the Nova bugs
> tagged with low-hanging-fruit (when appropriate) for those of us
> looking for new-contributor-friendly bugs to work on.
> 
> https://bugs.launchpad.net/nova/+bugs?field.tag=low-hanging-fruit
> 
> Thanks again & have a great weekend!
> 
> Cheers,
> 
> --diana

True, that's a useful way of onboarding new contributors. Unfortunately,
as soon as a bug is tagged with it, someone has usually grabbed it already.
And the old bugs (> 1 year) are probably not "low hanging". I can only
encourage the people who triage bugs in their area of expertise to check
whether a bug is "new-contributor-friendly" and, if so, leave it
open for newcomers. 
You could also have a look at the bugs in the "Triaged" state. They
should have enough information to fix the bug.

Regarding "non-bug" areas where you can contribute, there are also some
areas which have explicitly called out for more help. I remember:
* the API team [1] and
* the notification/tasks area [2].
Maybe it makes sense to reach out to them and ask how you can help there.

[1] https://etherpad.openstack.org/p/mitaka-nova-api
[2] https://etherpad.openstack.org/p/mitaka-nova-error-handling

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Reminder: Team meeting on Monday at 2100 UTC

2015-11-09 Thread Carl Baldwin
I've been using Iceland's TZ for this.  Seems to work well and handle
the TZ changes nicely.

Carl

On Sat, Nov 7, 2015 at 7:24 AM, Sean M. Collins  wrote:
> Learn from my mistake, check your calendar for the timezone if you've
> created an event for the weekly meetings. Google makes it a hassle to
> set things in UTC time, so I was caught by surprise by the FwaaS meeting
> due to the DST change in the US of A.
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Creating a CA for openstack-ansible deployments?

2015-11-09 Thread Clark, Robert Graham
> -Original Message-
> From: Adam Young [mailto:ayo...@redhat.com]
> Sent: 02 November 2015 20:54
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [openstack-ansible][security] Creating a CA for 
> openstack-ansible deployments?
> 
> On 10/26/2015 02:38 PM, Major Hayden wrote:
> > Hello there,
> >
> > I've been researching some additional ways to secure openstack-ansible 
> > deployments and I backed myself into a corner with secure log
> transport.  The rsyslog client requires a trusted CA certificate to be able 
> to send encrypted logs to rsyslog servers.  That's not a problem if
> users bring their own certificates, but it does become a problem if we use 
> the self-signed certificates that we're creating within the various
> roles.
> >
> > I'm wondering if we could create a role that creates a CA on the deployment 
> > host and then uses that CA to issue certificates for various
> services *if* a user doesn't specify that they want to bring their own 
> certificates.  We could build the CA very early in the installation
> process and then use it to sign certificates for each individual service.  
> That would allow to have some additional trust in environments
> where deployers don't choose to bring their own certificates.
> >
> > Does this approach make sense?
> >
> > --
> > Major Hayden
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> FreeIPA has a Dogtag server that can be your full CA.  I would recommend
> not rolling our own.
> 
> We have a playbook that does this here:
> https://github.com/admiyo/rippowam  specifically in the
> https://github.com/admiyo/rippowam/tree/master/roles/ipaserver  role
> 

Fundamentally everything is self-signed at its root. Who your systems trust 
depends on which certificates are installed in your system CA bundles,  which 
can be something Ansible takes care of or some other magic - you can distribute 
corporate CA certs or even specifically created local CA certs depending on your 
requirements.

A note on ephemeral certs (Anchor or otherwise): the point here is that 
revocation works pretty poorly in Linux, and especially poorly with the crypto 
libraries available to Python today (CRL distribution is pretty 
non-deterministic at scale, and OCSP just isn't supported). So in reality, your 
best mechanism could be to issue only short-term certificates and replace them 
often; if you need to revoke a certificate, you simply don't replace it. This 
technique is actually pretty CA-agnostic; I imagine you could easily configure 
Dogtag, FreeIPA, EJBCA, ADCS, etc. to issue short-life certificates.
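
As a concrete illustration of the short-life approach, here is a minimal
sketch using pyca/cryptography (assumed available; a real deployment would
have the CA sign a CSR under an automated RA policy rather than self-sign):

    import datetime

    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                   backend=default_backend())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                         u'rsyslog.example.com')])
    now = datetime.datetime.utcnow()

    # Valid for 24 hours only: instead of revoking a bad certificate, you
    # simply stop reissuing it and let it age out.
    cert = (x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)  # self-signed here purely for brevity
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(hours=24))
            .sign(key, hashes.SHA256(), default_backend()))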

The sticking point with short-life certificates is that the lifecycle is so 
short that when working at any sort of scale automated certificate signing is 
required, so robust policy management is required, automating a large part of 
what a traditional Registration Authority (RA) would do - which is a problem 
we've tried to solve with Anchor.

So I guess for any project you need to consider how/if revocation will work. In 
truth, most of the time it doesn't; we pretend it does and carry on, worrying 
instead about making sure we use publicly signed certificates.

I agree that running a local letsencrypt has a lot of promise here; it again 
looks at replacing a traditional RA with various proofs of possession or 
control (like control of specific DNS records etc).

There are ways to make most services play nice with ephemeral certificates - 
generally it involves front ending the service with a TLS terminator or LB - 
we've found through testing that they're generally pretty good at having 
certificates (and keys) swapped out from under them.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-09 Thread Adam Young

On 11/09/2015 09:46 AM, Thierry Carrez wrote:

Sean Dague wrote:

I do wonder what the cause of varying quality is in the distros. I do
understand that some distros aren't licensing the test suite. But they
are all building from the same upstream.

Except that they all use significant (and different) patchsets on top of
that "same upstream". Ubuntu for example currently carries 69 patches on
top of OpenJDK 7 source.

I can't speak explicitly for Ubuntu, but this is the norm for anything 
from a Distro;  they pick a version  for the LTS release and then choose 
the patches to continue to apply to that version. Red Hat does this with 
OpenStack, as well as OpenJDK and the Kernel.  The same is true of 
Python, too.


Java is a different language than Python.  I understand that many people 
dislike it for verbosity, its proprietary history, and so forth.  But 
with the OpenJDK, we have the same rules for releasing it as we do for 
Python, Ruby, Perl, Common Lisp, COBOL, Fortran, Ada, C++, and Go.  Hope 
to add Rust to that litany, soon, too.


I personally like Java, but feel like we should focus on limiting the 
number of languages we need to understand in order to Do OpenStack 
development.  I personally find Python annoying since I like type 
safety, but I've come to understand it.  The fact that Puppet already 
makes us jump to Ruby makes it hard.  One reason I prefer the 
recent inroads by Ansible is that it lets me stick to a single language 
for the business logic.  It has nothing to do with the relative 
technical merits of Python versus Ruby.


For the most part, the third party tools we've had to integrate have 
been native apps, at least on the Keystone side;  LDAP, Database, and 
Memcache are all native for performance reasons.  The Dogpile 
abstraction would allow us to do Cassandra, but that never materialized.


As an example: I've been pointing out that we should be defaulting to  
Dogtag for Certificates.  Dogtag is a Java server app.  This is due to 
its long history as an OpenSource CA with very demanding deployments 
hardening it.  However, I don't think it should be the CA abstraction 
for OpenStack.  I would recommend a Native tool, Certmonger, with a 
mechanism that can be extended by Python.  This would allow for a native 
python implementation, or any other that actual deployments would choose 
to use, as the CA implementation.


Let's keep the toolchain understandable, but for the right reasons.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack]host not reachable with iptables reject after init

2015-11-09 Thread Brian Haley

On 11/09/2015 09:55 AM, Wilence Yao wrote:

Hi all,
After running devstack/stack.sh to completion, I found that the API is not
reachable. After some checking, I found that some iptables rules cause the
problem:

ACCEPT tcp  -- 0.0.0.0/0  0.0.0.0/0  state NEW tcp dpt:22
REJECT all  -- 0.0.0.0/0  0.0.0.0/0  reject-with icmp-host-prohibited

The last two rules reject all access to the host except port 22 (ssh). Why
does devstack add these two rules on the host?


The devstack scripts don't add either of those rules, my guess is your distro 
has locked things down by default.  So you'll need to figure out how best to 
deal with it, either disabling completely or opening all the ports you'll need 
for devstack to function.


-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][infra][neutron] ZUUL_BRANCH not set for periodic stable jobs

2015-11-09 Thread Ihar Hrachyshka

Hi all,

I noticed that neutron-lbaas jobs for liberty fail when run as periodic jobs:

http://logs.openstack.org/periodic-stable/periodic-neutron-lbaas-python27-liberty/3da452c/

But they don’t fail in that way when running in gate:

https://review.openstack.org/#/c/242534/

From the failure log, I determined that the tests fail because they assume  
neutron/liberty code, but actually run against neutron/master (which does  
not have the neutron.plugins.embrane.* namespace because the plugin was  
removed in Mitaka).


I then compared how we fetch neutron in gate and in periodic jobs, and I  
see that the ZUUL branch variable is not set in the latter jobs.


For gate jobs, I see:

“””
INFO:zuul.Cloner:Creating repo openstack/neutron from cache  
file:///opt/git/openstack/neutron
INFO:zuul.Cloner:Updating origin remote in repo openstack/neutron to  
git://git.openstack.org/openstack/neutron

INFO:zuul.Cloner:upstream repo has branch stable/liberty
INFO:zuul.Cloner:Falling back to branch stable/liberty
INFO:zuul.Cloner:Prepared openstack/neutron repo with branch stable/liberty  
at commit 4f8e95b0eb84a3659d7f26eeb58425a754bd3606

“”"

But for periodic jobs, I see:

“””
INFO:zuul.Cloner:Creating repo openstack/neutron from cache  
file:///opt/git/openstack/neutron
INFO:zuul.Cloner:Updating origin remote in repo openstack/neutron to  
git://git.openstack.org/openstack/neutron

INFO:zuul.Cloner:upstream repo is missing branch None
INFO:zuul.Cloner:Falling back to branch master
INFO:zuul.Cloner:Prepared openstack/neutron repo with branch master at  
commit 669dcc41bb04b8c0e0b914d95b84321ecd44be69

“”"

(all snippets are from tox/py27-1.log files in log dirs.)

For lbaas/liberty, we fetch neutron code using the following code:

https://github.com/openstack/neutron-lbaas/blob/stable/liberty/tools/tox_install.sh#L24

Note that we *don’t* pass --branch or --zuul_branch as an argument to  
zuul-cloner. I guess if we added the argument there, it would correctly  
fetch neutron/liberty for us, and everything would work.


Now, before I go to neutron-lbaas and the other neutron repos that use a  
similar approach to fetch neutron code and fix them with an explicit branch  
argument, I wonder whether a better fix would be to actually set ZUUL_BRANCH  
for those periodic jobs, making them more in line with gate.


So, I have several questions:
- is there any technical reason not to pass the envvar for those jobs?
- if not, then where can I enforce it in infra repos? [I tried to locate  
the proper place myself, but apparently my jjb-fu is not good enough.]


Thanks
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-09 Thread Markus Zoeller
John Garbutt  wrote on 11/09/2015 11:10:55 AM:

> From: John Garbutt 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/09/2015 11:11 AM
> Subject: Re: [openstack-dev] [nova][bugs] Weekly Status Report
> 
> On 6 November 2015 at 21:39, Augustina Ragwitz  
wrote:
> > I totally second that on the low-hanging-fruit tag! And I'm happy to help
> > with some of the triaging. I'm still pretty new to Nova so I'm not sure how
> > helpful I'll be, but it seems like a good task for getting to know more
> > about things at least from a high level.
> >
> > Would bug triaging be a meta low-hanging-fruit item? ;)
> 
> Would be good to track our list of low-hanging-fruit bugs.

I think the launchpad query for the tag is the way to track this, right?
https://bugs.launchpad.net/nova/+bugs?field.tag=low-hanging-fruit

> Off topic slightly, but we have an etherpad with some ideas, see here:
> https://wiki.openstack.org/wiki/Nova/Mentoring#What_should_I_work_on.3F
> 
> Thanks,
> johnthetubaguy

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-09 Thread Thierry Carrez
Hi everyone,

A few cycles ago we set up the Release Cycle Management team which was a
bit of a frankenteam of the things I happened to be leading: release
management, stable branch maintenance and vulnerability management.
While you could argue that there was some overlap between those
functions (as in, "all these things need to be released"), logic was not
the primary reason they were put together.

When the Security Team was created, the VMT was spun out of the
Release Cycle Management team and joined there. Now I think we should
spin out stable branch maintenance as well:

* A good chunk of the stable team work used to be stable point release
management, but as of stable/liberty this is now done by the release
management team and triggered by the project-specific stable maintenance
teams, so there is no more overlap in tooling used there

* Following the kilo reform, the stable team is now focused on defining
and enforcing a common stable branch policy[1], rather than approving
every patch. Being more visible and having more dedicated members can
only help in that very specific mission

* The release team is now headed by Doug Hellmann, who is focused on
release management and does not have the history I had with stable
branch policy. So it might be the right moment to refocus release
management solely on release management and get the stable team its own
leadership

* Empowering that team to make its own decisions, giving it more
visibility and recognition will hopefully lead to more resources being
dedicated to it

* If the team expands, it could finally own stable branch health and
gate fixing. If that ends up all falling under the same roof, that team
could make decisions on support timeframes as well, since it will be the
primary resource to make that work

So.. good idea ? bad idea ? What do current stable-maint-core[2] members
think of that ? Who thinks they could step up to lead that team ?

[1] http://docs.openstack.org/project-team-guide/stable-branches.html
[2] https://review.openstack.org/#/admin/groups/530,members

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-09 Thread Markus Zoeller
Augustina Ragwitz  wrote on 11/06/2015 10:39:17 PM:

> From: Augustina Ragwitz 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/06/2015 10:39 PM
> Subject: Re: [openstack-dev] [nova][bugs] Weekly Status Report
> 
> I totally second that on the low-hanging-fruit tag! And I'm happy to 
> help with some of the triaging. I'm still pretty new to Nova so I'm not 
> sure how helpful I'll be but it seems like a good task for getting to 
> know more about things at least from a high level.

Yes, confirming/invalidating new bugs is totally helpful for getting to know
another segment of the wide range of Nova's functionality, IMO. It takes
time to do it, and you also learn pretty fast which information will
be useful when you report a bug yourself :)

> Would bug triaging be a meta low-hanging-fruit item? ;)

To be honest, I don't think it's easy. You need to know a lot of the
code to understand the bug but you need to fix some bugs to deeply 
understand the code base... you see the cycle here.

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-09 Thread Adam Young

On 11/09/2015 10:57 AM, Tim Hinrichs wrote:
Congress happens to have the capability to run a script/API call under 
arbitrary conditions on the state of other OpenStack projects, which 
sounded like what you wanted.  Or did I misread your original question?


Congress and Mistral are definitely not competing. Congress lets 
people declare which states of the other OpenStack projects are 
permitted using a general purpose policy language, but it does not try 
to make complex changes (often requiring a workflow) to eliminate 
prohibited states.  Mistral lets people create a workflow that makes 
complex changes to other OpenStack projects, but it doesn't have a 
general purpose policy language that describes which states are 
permitted. Congress and Mistral are complementary, and each can stand 
on its own.


And why should not these two things be in a single project?




Tim


On Mon, Nov 9, 2015 at 6:46 AM Adam Young > wrote:


On 11/06/2015 06:28 PM, Tim Hinrichs wrote:

Congress allows users to write a policy that executes an action
under certain conditions.

The conditions can be based on any data Congress has access to,
which includes nova servers, neutron networks, cinder storage,
keystone users, etc.  We also have some Ceilometer statistics;
I'm not sure about whether it's easy to get the Keystone
notifications that you're talking about today, but notifications
are on our roadmap.  If the user's login is reflected in the
Keystone API, we may already be getting that event.

The action could in theory be a mistral/heat API or an arbitrary
script.  Right now we're set up to invoke any method on any of
the python-clients we've integrated with.  We've got an
integration with heat but not mistral.  New integrations are
typically easy.


Sounds like Mistral and Congress are competing here, then.  Maybe
we should merge those efforts.




Happy to talk more.

Tim



On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann
> wrote:

Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28
-0600:
> On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann
> wrote:
>
> > Excerpts from Clint Byrum's message of 2015-11-05
10:09:49 -0800:
> > > Excerpts from Doug Hellmann's message of 2015-11-05
09:51:41 -0800:
> > > > Excerpts from Adam Young's message of 2015-11-05
12:34:12 -0500:
> > > > > Can people help me work through the right set of
tools for this use
> > case
> > > > > (has come up from several Operators) and map out a
plan to implement
> > it:
> > > > >
> > > > > Large cloud with many users coming from multiple
Federation sources
> > has
> > > > > a policy of providing a minimal setup for each user
upon first visit
> > to
> > > > > the cloud:  Create a project for the user with a
minimal quota, and
> > > > > provide them a role assignment.
> > > > >
> > > > > Here are the gaps, as I see it:
> > > > >
> > > > > 1.  Keystone provides a notification that a user
has logged in, but
> > > > > there is nothing capable of executing on this
notification at the
> > > > > moment.  Only Ceilometer listens to Keystone
notifications.
> > > > >
> > > > > 2.  Keystone does not have a workflow engine, and
should not be
> > > > > auto-creating projects.  This is something that
should be performed
> > via
> > > > > a Heat template, and Keystone does not know about
Heat, nor should
> > it.
> > > > >
> > > > > 3.  The Mapping code is pretty static; it assumes a
user entry or a
> > > > > group entry in identity when creating a role
assignment, and neither
> > > > > will exist.
> > > > >
> > > > > We can assume a special domain for Federated users
to have per-user
> > > > > projects.
> > > > >
> > > > > So; lets assume a Heat Template that does the
following:
> > > > >
> > > > > 1. Creates a user in the per-user-projects domain
> > > > > 2. Assigns a role to the Federated user in that project
> > > > > 3. Sets the minimal quota for the user
> > > > > 4. Somehow notifies the user that the project has
been set up.
> > > > >
> > > > > This last probably assumes an email address from
the Federated
> > > > > assertion.  Otherwise, the user hits Horizon, gets
a "not
> > authenticated
> > > > > for any projects" error, and is stumped.
> > > > >
> > 

Re: [openstack-dev] [Neutron] Reminder: Team meeting on Monday at 2100 UTC

2015-11-09 Thread Kevin Benton
There is also the "(GMT +00:00) GMT (no daylight savings)" entry, which
I've been using. Not sure if there is a difference between that and the
Reykjavik one.

On Mon, Nov 9, 2015 at 9:55 AM, Carl Baldwin  wrote:

> I've been using Iceland's TZ for this.  Seems to work well and handle
> the TZ changes nicely.
>
> Carl
>
> On Sat, Nov 7, 2015 at 7:24 AM, Sean M. Collins 
> wrote:
> > Learn from my mistake, check your calendar for the timezone if you've
> > created an event for the weekly meetings. Google makes it a hassle to
> > set things in UTC time, so I was caught by surprise by the FwaaS meeting
> > due to the DST change in the US of A.
> >
> > --
> > Sean M. Collins
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly subteam status report

2015-11-09 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)

(diff with Oct 19)
- Open: 171 (+11). 12 new (+9), 63 in progress (+22), 0 critical, 14 high
(+3) and 10 incomplete (-1)
- Nova bugs with Ironic tag: 25. 1 new (-1), 0 critical, 0 high
- Inspector bugs: 15. 1 new (+1), 0 critical, 6 high (-1)

dtantsur is finally back and has to catch up with bug triaging


ironic-lib adoption (rloo)
==
patch for disk partitioner code to use ironic-lib:
https://review.openstack.org/#/c/184443/, rloo +deva not happy with configs
moved to ironic-lib


Nova Liaisons (jlvillal & mrda)
===
Conducted bug scrub and did reviews of proposed patches.


Testing/Quality (jlvillal/lekha/krtaylor)

- Had first kickoff of weekly meeting.
https://wiki.openstack.org/wiki/Meetings/Ironic-QA
- jlvillal has been working on making Grenade work. Running into a few
roadblocks. jroll has been providing some assistance.
- First roadblock is that the Tempest 'smoke' job fails for Ironic. One
test is failing.
- Functional testing patch merged for python-ironicclient. The functional
tests are now passing in python-ironicclient.
- CI Spec/blueprint started: https://review.openstack.org/#/c/241294/
- Etherpad for testing/CI etc revived
https://etherpad.openstack.org/p/IronicCI


Inspector (dtantsur)
===
- Inspector is release:managed now
- autodiscovery is being discussed (again), please see the mailing list:
-
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078166.html
- We now have a neutron RFE to support DHCP opts per network/subnet instead
of just per port:
-
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078172.html
- https://bugs.launchpad.net/neutron/+bug/1512666


Bifrost (TheJulia)
=
- Revisions for inspection support have been posted and could use some
reviews.
- Hoping to release next version of bifrost once inspection support has
landed.


Drivers
==

iRMC (naohirot)
--
-
https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z
- Status: Reactive (solicited for core team's review)
- New boot driver interface for iRMC drivers (bp/new-boot-interface)
- Enhance Power Interface for Soft Reboot and NMI
(bp/enhance-power-interface-for-soft-reboot-and-nmi)
- iRMC out of band inspection (bp/ironic-node-properties-discovery)

   -
https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z
- Status: Active (Resumed from Nov. 9th after the summit)
- Enhance Power Interface for Soft Reboot and NMI
(bp/enhance-power-interface-for-soft-reboot-and-nmi)
- OOB rescue mode support (bp/oob-rescue-mode-support)
- Status: Reactive (solicited for core team's review)
- New boot driver interface for iRMC drivers (bp/new-boot-interface)
- iRMC out of band inspection (bp/ironic-node-properties-discovery)


As discussed in today's ironic meeting [1], we will include subteam reports
(if available) for the mitaka project priorities [2] in the future.

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
[1]
http://eavesdrop.openstack.org/meetings/ironic/2015/ironic.2015-11-09-17.00.log.html
[2]
http://specs.openstack.org/openstack/ironic-specs/priorities/mitaka-priorities.html#network-isolation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] distributing work using work items - call for participation in distributed blueprint development

2015-11-09 Thread Steven Dake (stdake)
Hey folks,

So far a whole slew of people have joined up to develop small bits of this 
blueprint!  Thanks for that commitment.  That said, there is still more work to 
be done - so please feel free to pick up 1 or 2 container sets.

The initial R for this blueprint has been completed after three separate 
implementation attempts.  A big thanks to Sam Yaple and Paul Bourke for 
putting up with me while I hammered out the right approach.

The base implementation is here:
https://review.openstack.org/#/c/242876/

To see how the base implementation was used with glance (the implementation to 
copy), check out:
https://review.openstack.org/#/c/242877/

The 242877 review should mostly be copied and pasted, with a bit of brainpower, 
to implement the securing of the containers for the other container sets.  
The ones that may not follow the above pattern are nova, neutron, horizon, and 
keystone, because nova/neutron need to sudo to root via rootwrap (it may or may 
not work as is) and horizon/keystone need the UIDs they currently run under 
(i.e. root + horizon + apache) merged into one (just horizon).
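
For anyone new to the idea, here is a generic sketch of the drop-root pattern
in Python (purely illustrative; kolla itself does this through its Dockerfiles
and Ansible, not through code like this):

    import os
    import pwd

    def drop_privileges(username):
        # Switch the current (root) process to an unprivileged user.
        user = pwd.getpwnam(username)
        os.setgroups([])        # drop supplementary groups first
        os.setgid(user.pw_gid)  # set gid before uid, or we lose the
        os.setuid(user.pw_uid)  # privilege needed to change gid

    drop_privileges('glance')  # hypothetical service account name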

Thanks in advance for your contribution!  Let's get 'er done by Friday!

Regards
-steve


From: Steven Dake >
Reply-To: 
"openstack-dev@lists.openstack.org" 
>
Date: Thursday, November 5, 2015 at 6:18 PM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [kolla] distributing work using work items - call for 
participation in distributed blueprint development

Hi folks,

Sam Yaple had suggested we try using Work Items to track our work rather than 
Etherpad for complex distributed tasks.  I've picked a pretty easy blueprint 
which should be mostly one-line patches where everyone can chip in.  The work 
should be pretty easy, even for new contributors to the project - so please 
feel free to sign up for contributing work even if you are new to the project.  
If you're unable to set your name in the work items field, ping sdake on irc to 
add you to the kolla-drivers group.

The blueprint is:
https://blueprints.launchpad.net/kolla/+spec/drop-root

The goal of the blueprint is to run the processes for each container as the 
correct UID instead of root (except for the case where the container requires 
root to do its job).  These are easy to pick out in the ansible files by the 
privileged: true flag.  The real goal of this blueprint is to test if this new 
work items workflow is faster and more effective than etherpad (while also 
delivering this essential security work for mitaka-1 (deadline December 4th)).

Please take a moment to sign up for 1-4 container sets.  To do that, click the 
Yellow checkbox in the work items field in launchpad, and then replace the 
"unassigned" entry next to the work item with your irc nickname.  I'd like this 
work to finish as rapidly as possible, so please try to knock out the work by 
next Friday (November 13th).  Please try to complete the work if you assign 
yourself to the container set by November 13th.

Regards,
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-09 Thread Fox, Kevin M
Dedicating 3 controller nodes in a small cloud is sometimes not the best 
allocation of resources.  You're thinking of medium to large clouds; small 
production clouds are a thing too, and at that scale a little downtime, if you 
actually hit the rare case of a node failure on the controller, may be 
acceptable. It's up to an op to decide.

We've also experienced that HA software sometimes causes more, or longer, 
downtime than it prevents, due to its complexity, the knowledge required, 
proper testing, etc. Again, in some ways the risk gets higher the smaller the 
cloud is.

Being able to keep it simple and small for that case, then scale by switching 
out pieces as needed, does have some tangible benefits.
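
On the "switching out pieces" point, here is a minimal sketch of what that
looks like through tooz, the coordination/DLM abstraction the summit
discussion centered on (the backend URLs are examples and assume the matching
tooz drivers are installed):

    from tooz import coordination

    # The backend is just a URL: swap zookeeper for etcd (or a simpler
    # non-HA driver on a small cloud) without touching application code.
    coord = coordination.get_coordinator('zookeeper://127.0.0.1:2181',
                                         b'controller-1')
    # coord = coordination.get_coordinator('etcd://127.0.0.1:2379',
    #                                      b'controller-1')
    coord.start()

    lock = coord.get_lock(b'resize-instance-42')
    with lock:
        pass  # critical section: only one member holds the lock at a time

    coord.stop()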

Thanks,
Kevin

From: Maish Saidel-Keesing [mais...@maishsk.com]
Sent: Monday, November 09, 2015 11:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager 
discussion @ the summit

On 11/05/15 23:18, Fox, Kevin M wrote:
> You're assuming there are only 2 choices,
>   zk or db+rabbit. I'm claiming both are suboptimal at present. A 3rd might 
> be needed. Though even with its flaws, the db+rabbit choice has a few 
> benefits too.
>
> You also seem to assert that to support large clouds, the default must be 
> something that can scale that large. While that would be nice, I don't think 
> it's a requirement if it's overly burdensome on deployers of non-huge clouds.
>
> I don't have metrics, but I would be surprised if most deployments today 
> (production + other) used 3 controllers with a full ha setup. I would guess 
> that the majority are single controller setups. With those, the
I think it would be safe to assume that any kind of production cloud -
or any operator that considers their OpenStack environment something
that is close to production ready - would not be daft enough to deploy
their whole environment based on a single controller, which is a
whopper of a single point of failure.

Most Fuel (Mirantis) deployments are multiple controllers.
RHOS also recommends deploying multiple controllers.

I don't think that we as a community can afford to assume that 1
controller will suffice.
That is not to say that maintaining zk will be any easier, though.
> overhead of maintaining a whole dlm like zk seems like overkill. If db+rabbit 
> would work for that one case, that would be one less thing for an op to set 
> up; they already have to set up db+rabbit. Or even a dlm plugin of some 
> sort, that won't scale but would be very easy to deploy and change out 
> later when needed, would be very useful.
>
> etcd is starting to show up in a lot of other projects, and so it may be at 
> sites already. Being able to support it may be less of a burden to operators 
> than zk in some cases.
>
> If your cloud grows to the point where the dlm choice really matters for 
> scalability/correctness, then you probably have enough staff members to deal 
> with adding in zk, and that's probably the right choice.
>
> You can have multiple suggested things in addition to one default. Default to 
> the thing that makes the most sense in the most common deployments, and make 
> specific recommendations for certain scenarios, like "if greater than 100 
> nodes, we strongly recommend using zk" or something to that effect.
>
> Thanks,
> Kevin
>
>
> 
> From: Clint Byrum [cl...@fewbar.com]
> Sent: Thursday, November 05, 2015 11:44 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager  
> discussion @ the summit
>
> Excerpts from Fox, Kevin M's message of 2015-11-04 14:32:42 -0800:
>> To clarify that statement a little more,
>>
>> Speaking only for myself as an op, I don't want to support yet one more 
>> snowflake in a sea of snowflakes, that works differently than all the rest, 
>> without a very good reason.
>>
>> Java has its own set of issues associated with the JVM: care-and-feeding 
>> sorts of things. If we are to invest time/money/people in learning how to 
>> properly maintain it, it's easier to justify if it's not a one-off for 
>> just DLM.
>>
>> So I wouldn't go so far as to say we're vehemently opposed to Java, just 
>> that DLM on its own is probably not a strong enough feature to justify 
>> requiring pulling in Java. It's been only a very recent thing that you 
>> could convince folks that DLM was needed at all. So either make Java 
>> optional, or find some other use case that needs Java badly enough that 
>> you can make Java a required component. I suspect some day searchlight 
>> might be compelling enough for that, but not today.
>>
>> As for the default, the default should be a good reference. If most sites 
>> would run with etcd or something else so that Java isn't needed, then don't 
>> default zookeeper on.
>>
> There are a number of reasons, but the most important are:
>
> * 

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-09 Thread Shraddha Pandhe
Hi Carl,

Please find my reply inline.


On Mon, Nov 9, 2015 at 9:49 AM, Carl Baldwin  wrote:

> On Fri, Nov 6, 2015 at 2:59 PM, Shraddha Pandhe <
> spandhe.openst...@gmail.com> wrote:
>>
>> We have a similar requirement where we want to pick a network that's
>> accessible in the rack that the VM belongs to. We have L3 top-of-rack, so
>> the network is confined to the rack. Right now, we are achieving this by
>> naming the physical network in a certain way, but that's not going to scale.
>>
>> We also want to be able to make scheduling decisions based on IP
>> availability. So we need to know the rack <-> network mapping.  We can't
>> embed all factors in a name. It will be impossible to make scheduling
>> decisions by parsing and comparing names. GoDaddy has also been doing
>> something similar [1], [2].
>>
>
> This is precisely the use case that the large deployers team (LDT) has
> brought to Neutron [1].  In fact, GoDaddy has been at the forefront of that
> request.  We've had discussions about this since just after Vancouver on
> the ML.  I've put up several specs to address it [2] and I'm working
> another revision of it.  My take on it is that Neutron needs a model for a
> layer 3 network (IpNetwork) which would group the rack networks.  The
> IpNetwork would be visible to the end user and there will be a network <->
> host mapping.  I am still aiming to have working code for this in Mitaka.
> I discussed this with the LDT in Tokyo and they seemed to agree.  We had a
> session on this in the Neutron design track [3][4] though that discussion
> didn't produce anything actionable.
>
That's great. An L3 network model is definitely one of our most
important requirements. All our go-forward deployments are going to be L3,
so this is a big deal for us.


> Solving this problem at the IPAM level has come up in discussion but I
> don't have any references for that.  It is something that I'm still
> considering but I haven't worked out all of the details for how this can
> work in a portable way.  Could you describe how you imagine this flow
> working from a user's perspective?  Specifically, when a user wants to
> boot a VM, what precise API calls would be made to achieve this on your
> network, and where would the IPAM data come into play?
>

Here's what the flow looks like to me.

1. User sends a boot request as usual. The user need not know all the
network and subnet information beforehand; all they do is send a boot
request.

2. The scheduler will pick a node in an L3 rack. The way we map nodes <->
racks is as follows:
a. For VMs, we store rack_id in nova.conf on compute nodes
b. For Ironic nodes, right now we have static IP allocation, so we
practically know which IP we want to assign. But when we move to dynamic
allocation, we would probably use 'chassis' or 'driver_info' fields to
store the rack id.

3. Nova compute will try to pick a network ID for this instance.  At this
point, it needs to know what networks (or subnets) are available in this
rack. Based on that, it will pick a network ID and send a port-creation
request to Neutron. At Yahoo, to avoid some back-and-forth, we send a fake
network_id and let the plugin do all the work.

4. We need some information associated with the network/subnet that tells
us what rack it belongs to. Right now, for VMs, we have that information
embedded in the physnet name, but we would like to move away from that. If
subnets had a column for this - e.g. a tag - it would solve our problem.
Ideally, we would like a 'rack_id' column, or a new 'racks' table that maps
to subnets, or something similar. We are open to different ideas that work
for everyone. This is where IPAM can help.

5. We have another requirement where we want to store multiple gateway
addresses for a subnet, just like name servers.


We also have a requirement to make scheduling decisions based on IP
availability: we want to allocate multiple IPs to a host, e.g. X IPs per
host. The flow in that case would be:

1. User sends a boot request with --num-ips X.
The network/subnet-level complexities need not be exposed to the user;
for a better experience, all we want our users to tell us is the number of
IPs they want.

2. When the scheduler tries to find an appropriate host in L3 racks, we
want it to find a rack that can satisfy this IP requirement. So, the
scheduler will basically say, "give me all racks that have >X IPs
available". If we have a 'Racks' table in IPAM, that would help.
Once the scheduler gets a rack, it will apply the remaining filters to
narrow down to one host and call nova-compute. The IP count will be
propagated to nova-compute from the scheduler.


3. Nova compute will call Neutron and send the node details and IP count
along. The Neutron IPAM driver will then look at the node details, query the
database to find a network in that rack, and allocate X IPs from the subnet.
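
To make this concrete, here is a rough SQLAlchemy sketch of the kind of
schema and queries steps 2 and 3 imply; the 'racks' and 'rack_subnets'
tables and their columns are hypothetical illustrations, not existing
Neutron schema:

import sqlalchemy as sa

metadata = sa.MetaData()

# Hypothetical tables: a rack registry plus a rack <-> subnet mapping.
racks = sa.Table(
    'racks', metadata,
    sa.Column('id', sa.String(36), primary_key=True),
    sa.Column('available_ips', sa.Integer))

rack_subnets = sa.Table(
    'rack_subnets', metadata,
    sa.Column('rack_id', sa.String(36), sa.ForeignKey('racks.id')),
    sa.Column('subnet_id', sa.String(36)))

def racks_with_capacity(conn, num_ips):
    # Step 2: "give me all racks that have >X IPs available".
    query = sa.select([racks.c.id]).where(racks.c.available_ips > num_ips)
    return [row[0] for row in conn.execute(query)]

def subnets_in_rack(conn, rack_id):
    # Step 3: the IPAM driver finds the subnets confined to one L3 rack.
    query = sa.select([rack_subnets.c.subnet_id]).where(
        rack_subnets.c.rack_id == rack_id)
    return [row[0] for row in conn.execute(query)]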



> Carl
>
> [1] 

Re: [openstack-dev] [Neutron] Reminder: Team meeting on Monday at 2100 UTC

2015-11-09 Thread Russell Bryant
Where do you find this?  I've been using Iceland, but this sounds more
explicit.  For me it makes me pick a country and then a TZ, so I'm not
seeing the "GMT (no daylight saving)" option anywhere.

-- 
Russell

On 11/09/2015 02:39 PM, Ihar Hrachyshka wrote:
> There is also 'GMT (no daylight saving)’ TZ available in Google Calendar
> that is identical to UTC for all practical matters.
> 
> Ihar
> 
> Carl Baldwin  wrote:
> 
>> I've been using Iceland's TZ for this.  Seems to work well and handle
>> the TZ changes nicely.
>>
>> Carl
>>
>> On Sat, Nov 7, 2015 at 7:24 AM, Sean M. Collins 
>> wrote:
>>> Learn from my mistake: check your calendar for the timezone if you've
>>> created an event for the weekly meetings. Google makes it a hassle to
>>> set things in UTC time, so I was caught by surprise by the FwaaS meeting
>>> due to the DST change in the US of A.
>>>
>>> -- 
>>> Sean M. Collins
>>>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-09 Thread Joshua Harlow

Adam Young wrote:

On 11/09/2015 09:46 AM, Thierry Carrez wrote:

Sean Dague wrote:

I do wonder what the cause of varying quality is in the distros. I do
understand that some distros aren't licensing the test suite. But they
are all building from the same upstream.

Except that they all use significant (and different) patchsets on top of
that "same upstream". Ubuntu for example currently carries 69 patches on
top of OpenJDK 7 source.


I can't speak explicitly for Ubuntu, but this is the norm for anything
from a distro; they pick a version for the LTS release and then choose
the patches to continue to apply to that version. Red Hat does this with
OpenStack, as well as OpenJDK and the kernel. The same is true of
Python, too.


+1

If you haven't looked at how many patches some RHEL packages carry, you 
might want to; 69 patches is nothing compared to what I have seen...


For example (rpm detailing program here):

https://gist.github.com/harlowja/9b443d1af13a57a48f4f

$ python rpm-info.py \
    "http://vault.centos.org/6.5/os/Source/SPackages/libvirt-0.10.2-29.el6.src.rpm" \
    "http://vault.centos.org/6.5/os/Source/SPackages/qemu-kvm-0.12.1.2-2.415.el6.src.rpm"


--
Getting details for 2 rpms
--
Downloading: 
http://vault.centos.org/6.5/os/Source/SPackages/libvirt-0.10.2-29.el6.src.rpm

[] 22378/22378 - 00:00:02
Downloading: 
http://vault.centos.org/6.5/os/Source/SPackages/qemu-kvm-0.12.1.2-2.415.el6.src.rpm

[] 9601/9601 - 00:00:01

-
Gathered info
-
+-------------------------------------+---------+
| Filename                            | Patches |
+=====================================+=========+
| libvirt-0.10.2-29.el6.src.rpm       |     538 |
+-------------------------------------+---------+
| qemu-kvm-0.12.1.2-2.415.el6.src.rpm |    3497 |
+-------------------------------------+---------+

IMHO at least Java has a TCK; do libvirt or KVM have one? Because with the 
number of patches listed above, who knows exactly what version of libvirt 
or qemu is really running (the version number makes less sense when there 
are 3400+ patches).
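
For anyone who wants to reproduce the counting part without the gist, here
is a minimal sketch that tallies the PatchN: tags declared in a spec file
already extracted from a src.rpm (an approximation: it counts declared
patches, not necessarily applied ones):

import re
import sys

PATCH_TAG = re.compile(r'^Patch\d*\s*:', re.IGNORECASE)

def count_patches(spec_path):
    # Tally the PatchN: tags declared in an rpm spec file.
    with open(spec_path) as spec:
        return sum(1 for line in spec if PATCH_TAG.match(line))

if __name__ == '__main__':
    for path in sys.argv[1:]:
        print('%s: %d patches' % (path, count_patches(path)))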




Java is a different language than Python. I understand that many people
dislike it for its verbosity, its proprietary history, and so forth. But
with the OpenJDK, we have the same rules for releasing it as we do for
Python, Ruby, Perl, Common Lisp, COBOL, Fortran, Ada, C++, and Go. Hope
to add Rust to that litany, soon, too.

I personally like Java, but feel like we should focus on limiting the
number of languages we need to understand in order to do OpenStack
development. I personally find Python annoying since I like type safety,
but I've come to understand it. The fact that Puppet makes us
jump to Ruby already makes it hard. One reason I prefer the recent
inroads by Ansible is that it lets me stick to a single language for the
business logic. It has nothing to do with the relative technical merits
of Python versus Ruby.

For the most part, the third party tools we've had to integrate have
been native apps, at least on the Keystone side; LDAP, Database, and
Memcache are all native for performance reasons. The Dogpile abstraction
would allow us to do Cassandra, but that never materialized.
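
For readers who haven't used it, the Dogpile abstraction here is
dogpile.cache, where the backend is just a configuration string; a minimal
sketch (the memcached URL and the cached function are made up):

from dogpile.cache import make_region

# Swapping memcached for another backend (a hypothetical Cassandra driver,
# say) would only change the backend string and arguments below.
region = make_region().configure(
    'dogpile.cache.memcached',
    expiration_time=600,
    arguments={'url': ['127.0.0.1:11211']})

@region.cache_on_arguments()
def get_user(user_id):
    # Stand-in for an expensive identity lookup.
    return {'id': user_id}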

As an example: I've been pointing out that we should be defaulting to
Dogtag for certificates. Dogtag is a Java server app. This is due to its
long history as an open-source CA, with very demanding deployments
hardening it. However, I don't think it should be the CA abstraction for
OpenStack. I would recommend a native tool, Certmonger, with a mechanism
that can be extended by Python. This would allow for a native Python
implementation, or any other that actual deployments choose to use,
as the CA implementation.

Let's keep the toolchain understandable, but for the right reasons.









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-09 Thread Joshua Harlow

Sean Dague wrote:

On 11/09/2015 06:05 AM, Thierry Carrez wrote:


So that is an important point. While there is "the Oracle JVM", there is
nothing like "OpenJDK". There are a number of OpenJDK builds by various
distros and they are all different (and of varying quality). The beast
is brittle, as anyone who has ever run the TCK on OpenJDK should be able
to tell you. The reason a lot of Java bugtrackers still start by asking
you to "reproduce on Oracle's JVM" is to eliminate that unknown, not
because OpenJDK is always bad.

My main objection about picking a Java solution was that we'd in effect
force our users into a non-free solution so that they eliminate that
unknown themselves. I guess as long as we are reasonably confident that
ZooKeeper behaves well with most OpenJDK implementations, and that there
are solid, well-known free software deployment options available, we
should be fine?


I think that's where declaring this fact early is good: hey distros,
this is going to need to work out of the box. That seems pretty reasonable,
and it's a heads-up that will help ensure things function well down
the road.

I do wonder what the cause of varying quality is in the distros. I do
understand that some distros aren't licensing the test suite. But they
are all building from the same upstream.

I want to be really specific before we as a community spread FUD around
an effort like OpenJDK, because it doesn't help us make long-term decisions
if we're basing them on long-standing biases that may or may not
still be supported by data.


+1 - we should probably try to resolve our own community's FUD (of which 
there is enough) vs. creating more FUD for another community to deal 
with...




-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Reminder: Team meeting on Monday at 2100 UTC

2015-11-09 Thread Ihar Hrachyshka

The GMT time zone is available by following the official instructions:

https://support.google.com/calendar/answer/37064?hl=en

"See other time zones in your calendar” section.

Ihar

Russell Bryant  wrote:


Where do you find this?  I've been using Iceland, but this sounds more
explicit.  For me it makes me pick a country and then a TZ, so I'm not
seeing the "GMT (no daylight saving)" option anywhere.

--
Russell

On 11/09/2015 02:39 PM, Ihar Hrachyshka wrote:

There is also 'GMT (no daylight saving)’ TZ available in Google Calendar
that is identical to UTC for all practical matters.

Ihar

Carl Baldwin  wrote:


I've been using Iceland's TZ for this.  Seems to work well and handle
the TZ changes nicely.

Carl

On Sat, Nov 7, 2015 at 7:24 AM, Sean M. Collins 
wrote:

Learn from my mistake, check your calendar for the timezone if you've
created an event for the weekly meetings. Google makes it a hassle to
set things in UTC time, so I was caught by surprise by the FwaaS meeting
due to the DST change in the US of A.

--
Sean M. Collins




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Creating puppet-keystone-core and proposing Richard Megginson core-reviewer

2015-11-09 Thread Ivan Chavero

congratulations Rich!!

- Original Message -
> From: "Emilien Macchi" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, November 4, 2015 9:10:17
> Subject: Re: [openstack-dev] [puppet] Creating puppet-keystone-core and 
> proposing Richard Megginson core-reviewer
> 
> 
> 
> On 11/03/2015 03:56 PM, Matt Fischer wrote:
> > Sorry I replied to this right away but used the wrong email address and
> > it bounced!
> > 
> >> I've appreciated all of Rich's v3 contributions to keystone. +1 from me.
> 
> 2 positive votes from our core-reviewer team.
> No negative vote at all.
> 
> I guess that's a 'yes', welcome Rich, you're the first
> puppet-keystone-core member!
> 
> Note: anyone else core-reviewer on Puppet modules is also core on
> puppet-keystone by the way.
> 
> Congrats Rich!
> 
> > On Tue, Nov 3, 2015 at 4:38 AM, Sofer Athlan-Guyot  > > wrote:
> > 
> > He's a very good reviewer with a deep knowledge of keystone and puppet.
> > Thank you Richard for your help.
> > 
> > +1
> > 
> > Emilien Macchi  > > writes:
> > 
> > > At the Summit we discussed scaling up our team.
> > > We decided to investigate the creation of sub-groups specific to our
> > > modules that would have +2 power.
> > >
> > > I would like to start with puppet-keystone:
> > > https://review.openstack.org/240666
> > >
> > > And propose Richard Megginson as part of this group.
> > >
> > > Rich has been leading the puppet-keystone work since our Juno cycle.
> > > Without his leadership and skills, I'm not sure we would have Keystone
> > > v3 support in our modules.
> > > He's a good Puppet reviewer and takes care of backward compatibility.
> > > He also has strong knowledge of how Keystone works. He's always
> > > willing to lead our roadmap regarding identity deployment in
> > > OpenStack.
> > >
> > > Having him on board is an awesome opportunity for us to be ahead of
> > > other deployment tools and to support many features in Keystone that
> > > real deployments actually need.
> > >
> > > I would like to propose him as part of the new puppet-keystone-core
> > > group.
> > >
> > > Thank you Rich for your work, which is very appreciated.
> > 
> > --
> > Sofer Athlan-Guyot
> > 
> > 
> > 
> 
> --
> Emilien Macchi
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-09 Thread Clint Byrum
Excerpts from Richard Raseley's message of 2015-11-09 10:34:26 -0800:
> From this operator’s perspective this is exactly the element of community 
> culture that, by encouraging the proliferation of projects and tools, is 
> making the OpenStack landscape more complex and less 
> {user,operator,architect,business decision maker} friendly.

I'd agree with you that looking at all of the "OpenStack projects" as
"OpenStack" will make you go cross-eyed.

However, I think if you start with the guides and documentation, that
is where the homogeneous, settled parts of the platform reside that some
people might call OpenStack's "product".

Once a thing starts to work well and is accepted by early adopters, it
gets fleshed-out documentation, and there's usually not a large decision
tree presented in there by that time.

If, however, you have more aggressive needs, then I think you have to
build on top of that platform and look at the greater landscape with a
focus toward solving specific problems. For instance, autoprovisioning,
and accept that it may not be clear what way works well.

> 
> In my opinion, it is essentially a manufactured and completely unnecessary 
> distinction. I look forward to the day when, through some yet to be known 
> mechanism, we have a more focused product perspective within the 
> community.
> 
> > On Nov 9, 2015, at 10:11 AM, Tim Hinrichs  wrote:
> > 
> > They shouldn't be combined because they can each be used without the other. 
> >  That is, they each stand on their own.
> > 
> > Congress can be used for monitoring or delegating policy without attempting 
> > to correct violations (i.e. without needing workflows).
> > 
> > Mistral can be used to make complex changes without writing a policy.
> > 
> > Tim
> > 
> > 
> > 
> > 
> > 
> > On Mon, Nov 9, 2015 at 8:57 AM Adam Young  wrote:
> > On 11/09/2015 10:57 AM, Tim Hinrichs wrote:
> >> Congress happens to have the capability to run a script/API call under 
> >> arbitrary conditions on the state of other OpenStack projects, which 
> >> sounded like what you wanted.  Or did I misread your original question?
> >> 
> >> Congress and Mistral are definitely not competing. Congress lets people 
> >> declare which states of the other OpenStack projects are permitted using a 
> >> general purpose policy language, but it does not try to make complex 
> >> changes (often requiring a workflow) to eliminate prohibited states.  
> >> Mistral lets people create a workflow that makes complex changes to other 
> >> OpenStack projects, but it doesn't have a general purpose policy language 
> >> that describes which states are permitted.  Congress and Mistral are 
> >> complementary, and each can stand on its own.
> > 
> > And why shouldn't these two things be in a single project?
> > 
> > 
> > 
> >> 
> >> Tim
> >> 
> >> 
> >> On Mon, Nov 9, 2015 at 6:46 AM Adam Young  wrote:
> >> On 11/06/2015 06:28 PM, Tim Hinrichs wrote:
> >>> Congress allows users to write a policy that executes an action under 
> >>> certain conditions.
> >>> 
> >>> The conditions can be based on any data Congress has access to, which 
> >>> includes nova servers, neutron networks, cinder storage, keystone users, 
> >>> etc.  We also have some Ceilometer statistics; I'm not sure about whether 
> >>> it's easy to get the Keystone notifications that you're talking about 
> >>> today, but notifications are on our roadmap.  If the user's login is 
> >>> reflected in the Keystone API, we may already be getting that event.
> >>> 
> >>> The action could in theory be a mistral/heat API or an arbitrary script.  
> >>> Right now we're set up to invoke any method on any of the python-clients 
> >>> we've integrated with.  We've got an integration with heat but not 
> >>> mistral.  New integrations are typically easy.
> >> 
> >> Sounds like Mistral and Congress are competing here, then.  Maybe we 
> >> should merge those efforts.
> >> 
> >> 
> >>> 
> >>> Happy to talk more.
> >>> 
> >>> Tim
> >>> 
> >>> 
> >>> 
> >>> On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann  
> >>> wrote:
> >>> Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
> >>> > On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann  
> >>> > wrote:
> >>> >
> >>> > > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> >>> > > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> >>> > > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> >>> > > > > > Can people help me work through the right set of tools for this 
> >>> > > > > > use
> >>> > > case
> >>> > > > > > (has come up from several Operators) and map out a plan to 
> >>> > > > > > implement
> >>> > > it:
> >>> > > > > >
> >>> > > > > > Large cloud with many users coming from multiple Federation 
> >>> > > > > > sources
> >>> > > has
> >>> > > > > > a policy of providing a minimal setup for each user upon 

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-09 Thread Maish Saidel-Keesing

On 11/05/15 23:18, Fox, Kevin M wrote:

You're assuming there are only 2 choices,
  zk or db+rabbit. I'm claiming both are suboptimal at present. A third might be 
needed. Though even with its flaws, the db+rabbit choice has a few benefits too.

You also seem to assert that to support large clouds, the default must be 
something that can scale that large. While that would be nice, I don't think 
it's a requirement if it's overly burdensome on deployers of non-huge clouds.

I don't have metrics, but I would be surprised if most deployments today 
(production + other) used 3 controllers with a full ha setup. I would guess 
that the majority are single controller setups. With those, the
I think it would be safe to assume that any kind of production cloud - 
or any operator that considers their OpenStack environment something 
that is close to production ready - would not be daft enough to deploy 
their whole environment based on a single controller, which is a 
whopper of a single point of failure.


Most Fuel (Mirantis) deployments are multiple controllers.
RHOS also recommends deploying multiple controllers.

I don't think that we as a community can afford to assume that 1 
controller will suffice.

That is not to say that maintaining zk will be any easier, though.

overhead of maintaining a whole dlm like zk seems like overkill. If db+rabbit 
would work for that one case, that would be one less thing for an op to set 
up; they already have to set up db+rabbit. Or even a dlm plugin of some sort, 
that won't scale but would be very easy to deploy and change out later when 
needed, would be very useful.

etcd is starting to show up in a lot of other projects, and so it may be at 
sites already. Being able to support it may be less of a burden to operators 
than zk in some cases.

If your cloud grows to the point where the dlm choice really matters for 
scalability/correctness, then you probably have enough staff members to deal 
with adding in zk, and that's probably the right choice.

You can have multiple suggested things in addition to one default. Default to the thing 
that makes the most sense in the most common deployments, and make specific 
recommendations for certain scenarios, like "if greater than 100 nodes, we strongly 
recommend using zk" or something to that effect.

Thanks,
Kevin



From: Clint Byrum [cl...@fewbar.com]
Sent: Thursday, November 05, 2015 11:44 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager  
discussion @ the summit

Excerpts from Fox, Kevin M's message of 2015-11-04 14:32:42 -0800:

To clarify that statement a little more,

Speaking only for myself as an op, I don't want to support yet one more 
snowflake in a sea of snowflakes, that works differently than all the rest, 
without a very good reason.

Java has its own set of issues associated with the JVM: care-and-feeding sorts 
of things. If we are to invest time/money/people in learning how to properly 
maintain it, it's easier to justify if it's not a one-off for just DLM.

So I wouldn't go so far as to say we're vehemently opposed to Java, just that 
DLM on its own is probably not a strong enough feature to justify requiring 
pulling in Java. It's been only a very recent thing that you could convince 
folks that DLM was needed at all. So either make Java optional, or find some 
other use case that needs Java badly enough that you can make Java a required 
component. I suspect some day searchlight might be compelling enough for 
that, but not today.

As for the default, the default should be a good reference. If most sites 
would run with etcd or something else so that Java isn't needed, then don't 
default zookeeper on.


There are a number of reasons, but the most important are:

* Resilience in the face of failures - The current database+MQ based
   solutions are all custom made and have unknown characteristics when
   there are network partitions and node failures.
* Scalability - The current database+MQ solutions rely on polling the
   database and/or sending lots of heartbeat messages or even using the
   database to store heartbeat transactions. This scales fine for tiny
   clusters, but when every new node adds more churn to the MQ and
   database, this will (and has been observed to) be intractable.
* Tech debt - OpenStack is inventing lock solutions and then maintaining
   them. And service discovery solutions, and then maintaining them.
   Wouldn't you rather have better upgrade stories, more stability, more
   scale, and more features?

If those aren't compelling enough reasons to deploy a mature java service
like Zookeeper, I don't know what would be. But I do think using the
abstraction layer of tooz will at least allow us to move forward without
having to convince everybody everywhere that this is actually just the
path of least resistance.
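
For reference, this is roughly what the tooz abstraction looks like in
practice; the backend URL and lock name below are invented for illustration,
and swapping zookeeper for another supported backend is a one-line change:

from tooz import coordination

# Only this URL changes when an operator swaps DLM backends.
coordinator = coordination.get_coordinator(
    'zookeeper://127.0.0.1:2181', b'my-service-node-1')
coordinator.start()

lock = coordinator.get_lock(b'resize-instance-42')
with lock:
    # Critical section: only one member holds this lock at a time.
    pass

coordinator.stop()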




--
Best Regards,
Maish Saidel-Keesing


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-09 Thread Shraddha Pandhe
Gary,

I agree. Moving away from that option, I am trying to propose the idea of
extended IPAM tables: https://bugs.launchpad.net/neutron/+bug/1513981 and
https://review.openstack.org/#/c/242688/
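
For context, Gary's versioned-object suggestion (quoted below) would look
roughly like this with oslo.versionedobjects; the class and its fields are
invented for illustration:

from oslo_versionedobjects import base
from oslo_versionedobjects import fields

@base.VersionedObjectRegistry.register
class IpamPoolMetadata(base.VersionedObject):
    # Bump VERSION when fields change, instead of mutating a free-form blob.
    VERSION = '1.0'

    fields = {
        'rack_id': fields.StringField(nullable=True),
        'purpose': fields.StringField(nullable=True),
    }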

On Sun, Nov 8, 2015 at 12:10 AM, Gary Kotton  wrote:

> I think that if we can move to a versioned object model then it will
> be better. Having random json blobs passed around is going to cause
> problems.
>
> From: "Armando M." 
> Reply-To: OpenStack List 
> Date: Wednesday, November 4, 2015 at 11:38 PM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam
> db tables
>
>
>
> On 4 November 2015 at 13:21, Shraddha Pandhe 
> wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>> make IPAM much more powerful. Some other projects already do things like
>> this.
>>
>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>> 'extras' arbitrary JSON field. This allows us to put any information in
>> there that we think is important for us.
>>
>
> I personally feel that relying on json blobs not only dangerously
> affects portability, but also causes us to bloat the business logic and
> forces us to be less efficient when querying/filtering data.
>
> Most importantly though, I feel it's like abdicating our responsibility to
> do a good design job. Ultimately, we should be able to identify how to
> model these extensions you're thinking of both conceptually and logically.
>
> I couldn't care less if other projects use it, but we ended up using it in
> Neutron too, and since I lost this battle time and time again, all I am
> left with is this rant :)
>
>
>>
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>
>>
>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando > > wrote:
>>
>>> Arbitrary blobs are a powerful tool to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for versioning or
>>> portability purposes.
>>> The parameters that should end up in such blob are typically specific
>>> for the target IPAM driver (to an extent they might even identify a
>>> specific driver to use), and therefore an API consumer who knows what
>>> backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability and
>>> not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants will be able to provide more input
>>> on this regard.
>>>
>>> In this case other approaches such as a vendor specific extension are
>>> not a solution - assuming your granularity level is the allocation pool;
>>> indeed allocation pools are not first-class neutron resources, and it is
>>> not therefore possible to have APIs which associate vendor specific
>>> properties to allocation pools.
>>>
>>> Salvatore
>>>
>>> On 4 November 2015 at 21:46, Shraddha Pandhe <
>>> spandhe.openst...@gmail.com> wrote:
>>>
 Hi folks,

 I have a small question/suggestion about IPAM.

 With IPAM, we are allowing users to have their own IPAM drivers so that
 they can manage IP allocation. The problem is, the new ipam tables in the
 database have the same columns as the old tables. So, as a user, if I want
 to have my own logic for ip allocation, I can't actually get any help from
 the database. Whereas, if we had an arbitrary json blob in the ipam tables,
 I could put any useful information/tags there, that can help me for
 allocation.

 Does this make sense?

 e.g. If I want to create multiple allocation pools in a subnet and use
 them for different purposes, I would need some sort of tag for each
 allocation pool for identification. Right now, there is no scope for doing
 something like that.

 Any thoughts? If there is any other way to solve the problem, please
 let me know.







>>>
>>>
>>>
>>>
>>
