Re: [openstack-dev] [Senlin][Magnum] Add container type profile to Senlin

2015-11-05 Thread xuanlangjian
Hi,
  Good to know Senlin plans to support containers. Will Senlin talk directly to
the Docker API or just talk to the Magnum API?
  When doing autoscaling, how does Senlin work with the native scaling functions
of k8s/swarm?

> On Nov 6, 2015, at 14:05, Haiwei Xu  wrote:
> 
> Hi all,
> 
> As we know, Senlin currently supports two kinds of profiles: Nova instance and
> Heat stack; of course, we want to support containers too. After coming back from
> the summit, I discussed this with Magnum core yuanying, and we agreed on adding
> container type profile support to Senlin. Maybe you have already thought about
> this idea.
> Our general idea is that Senlin makes a request to the Docker API to start/remove
> a container in/from a Magnum bay. The container will be shown in the Senlin
> node-list like a Nova instance or a Heat stack, and can also be added to a
> cluster or used for auto-scaling.
> Here is a profile example:
> 
> type: os.magnum.swarm.container
> version: 1.0
> properties:
>  bay_id: swarm_bay
>  compose_file: docker-compose.yaml
> 
> or:
> 
> type: os.magnum.kubernetes.container
> version: 1.0
> properties:
>  bay_id: kubernetes_bay
>  manifest: replication_controller.yaml
> 
> We will support two kinds of container creation.
> What is your thought about this? Any comments are welcome.
> 
> Regards,
> Xuhaiwei
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread Nader Lahouti
Thanks Gord for the explanation.

Nader.
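
For reference, the lifecycle gord describes (start the listener, leave it
running while consuming, then call stop() before wait() only at shutdown) looks
roughly like this with oslo.messaging's notification listener API (a sketch; the
endpoint class and executor choice here are illustrative):

    import oslo_messaging
    from oslo_config import cfg

    class NotificationEndpoint(object):
        # minimal illustrative endpoint; real ones filter on event_type etc.
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(event_type, payload)

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [NotificationEndpoint()], executor='threading')

    listener.start()   # keep consuming; do not call stop()/wait() here
    # ... application keeps running ...
    listener.stop()    # only when shutting down
    listener.wait()    # then wait for in-flight messages to finish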

On Thu, Nov 5, 2015 at 11:49 AM, gord chung  wrote:

> my understanding is that if you are calling stop()/wait() your intention
> is to shut down the listener. if you intend on keeping an active consumer
> on the queue, you shouldn't be calling either stop() or wait(), just start.
>
>
> On 05/11/2015 2:07 PM, Nader Lahouti wrote:
>
>
> Thanks for the pointer, I'll look into it. But one question: by calling
> stop() and then wait(), does it mean the application has to call start()
> again after the wait() to process more messages?
>
> I am also using
> http://docs.openstack.org/developer/oslo.messaging/server.html for the
> RPC server
> Does it mean there has to be stop() and then wait() there as well?
>
>
> Thanks,
> Nader.
>
>
>
> On Thu, Nov 5, 2015 at 10:19 AM, gord chung  wrote:
>
>>
>>
>> On 05/11/2015 1:06 PM, Nader Lahouti wrote:
>>
>>> Hi Doug,
>>>
>>> I have an app that listens to notifications and used the info provided in
>>>
>>> http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
>>>
>>>
>>> Basically I create
>>> 1. NotificationEndpoints(object):
>>>
>>> https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
>>> 2. NotifcationListener(object):
>>>
>>> https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
>>> 3. and call start() and  then wait()
>>>
>>
>> the correct usage is to call stop() before wait()[1]. for reference on
>> how to use listeners, you can see Ceilometer[2]
>>
>> [1]
>> http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
>> [2]
>> https://github.com/openstack/ceilometer/blob/master/ceilometer/utils.py#L250
>>
>> --
>> gord
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> gord
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-05 Thread Tony Breeds
Hello all,

I'll start by acknowledging that this is a big and complex issue and I
do not claim to be across all the view points, nor do I claim to be
particularly persuasive ;P

Having stated that, I'd like to seek constructive feedback on the idea of
keeping Juno around for a little longer.  During the summit I spoke to a
number of operators, vendors and developers on this topic.  There was some
support and some "That's crazy pants!" responses.  I clearly didn't make it
around to everyone, hence this email.

Acknowledging my affiliation/bias:  I work for Rackspace in the private
cloud team.  We support a number of customers currently running Juno that are,
for a variety of reasons, challenged by the Kilo upgrade.

Here is a summary of the main points that have come up in my conversations,
both for and against.

Keep Juno:
 * According to the current user survey[1] Icehouse still has the
   biggest install base in production clouds.  Juno is second, which makes
   sense. If we EOL Juno this month that means ~75% of production clouds
   will be running an EOL'd release.  Clearly many of these operators have
   support contracts from their vendor, so those operators won't be left 
   completely adrift, but I believe it's the vendors that benefit from keeping
   Juno around. By working together *in the community* we'll see the best
   results.

 * We only recently EOL'd Icehouse[2].  Sure it was well communicated, but we
   still have a huge Icehouse/Juno install base.

For me this is pretty compelling, but for balance ...

Keep the current plan and EOL Juno Real Soon Now:
 * There is also no ignoring the elephant in the room that with HP stepping
   back from public cloud there are questions about our CI capacity, and
   keeping Juno will have an impact on that critical resource.

 * Juno (and other stable/*) resources have a non-zero impact on *every*
   project, esp. @infra and release management.  We need to ensure this
   isn't too much of a burden.  This mostly means we need enough trustworthy
   volunteers.

 * Juno is also tied up with Python 2.6 support. When
   Juno goes, so will Python 2.6 which is a happy feeling for a number of
   people, and more importantly reduces complexity in our project
   infrastructure.

 * Even if we keep Juno for 6 months or 1 year, that doesn't help vendors
   that are "on the hook" for multiple years of support, so for that case
   we're really only delaying the inevitable.

 * Some number of the production clouds may never migrate from $version, in
   which case longer support for Juno isn't going to help them.


I'm sure these questions were well discussed at the YVR (Vancouver) summit where
we set the EOL date for Juno, but I was new then :) What I'm asking is:

1) Is it even possible to keep Juno alive (is the impact on the project as
   a whole acceptable)?

Assuming a positive answer:

2) Who's going to do the work?
- Me, who else?
3) What do we do if people don't actually do the work but we as a community
   have made a commitment?
4) If we keep Juno alive for $some_time, does that imply we also bump the
   life cycle on Kilo and liberty and Mitaka etc?

Yours Tony.

[1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf
(page 20)
[2] http://git.openstack.org/cgit/openstack/nova/tag/?h=icehouse-eol



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Senlin][Magnum] Add container type profile to Senlin

2015-11-05 Thread Haiwei Xu
Hi all,

As we know, Senlin currently supports two kinds of profiles: Nova instance and
Heat stack; of course, we want to support containers too. After coming back from
the summit, I discussed this with Magnum core yuanying, and we agreed on adding
container type profile support to Senlin. Maybe you have already thought about
this idea.
Our general idea is that Senlin makes a request to the Docker API to start/remove
a container in/from a Magnum bay. The container will be shown in the Senlin
node-list like a Nova instance or a Heat stack, and can also be added to a
cluster or used for auto-scaling.
Here is a profile example:

type: os.magnum.swarm.container
version: 1.0
properties:
  bay_id: swarm_bay
  compose_file: docker-compose.yaml

or:

type: os.magnum.kubernetes.container
version: 1.0
properties:
  bay_id: kubernetes_bay
  manifest: replication_controller.yaml

We will support two kinds of container creation.
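Just for illustration, the docker-compose.yaml referenced by the swarm profile
above could be any ordinary Compose file, e.g. something as small as this (the
service name and image are only an example, not part of the proposal):

    web:
      image: nginx
      ports:
        - "80:80"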
What is your thought about this? Any comments are welcome.

Regards,
Xuhaiwei


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-05 Thread Tony Breeds
Hello all,
I came across [1], which is notionally an ironic bug in that horizon presents
VM operations (like suspend) to users.  Clearly these options don't make sense
for ironic, which can be confusing.

There is a horizon fix that just disables migrate/suspend and other functions
if the operator sets a flag saying ironic is present.  Clearly this is suboptimal
for a mixed hypervisor environment.

The data needed (hypervisor type) is currently available only to admins; a quick
hack to remove this policy restriction is functional.

There are a few ways to solve this.

 1. Change the default from "rule:admin_api" to "" (for
    os_compute_api:os-extended-server-attributes and
    os_compute_api:os-hypervisors), and set a list of values we're
    comfortable exposing to the user (hypervisor_type and
    hypervisor_hostname).  So a user can get the hypervisor_hostname as part of
    the instance details and get the hypervisor_type from
    os-hypervisors.  This would work for horizon but increases the API load
    on nova and kinda implies that horizon would have to cache the data and
    open-code assumptions about which actions hypervisor_type can/can't do.
    (A policy.json sketch of this change follows the list below.)

 2. Include the hypervisor_type with the instance data.  This would place the
    burden on nova.  It makes looking up instance details slightly more
    complex but doesn't result in additional API queries, nor caching
    overhead in horizon.  This has the same open-coding issues as option 1.

 3. Define a service user and have horizon look up the hypervisor details via
    that role.  This has all the drawbacks of option 1 and I'm struggling to
    think of many benefits.

 4. Create a capabilities API of some description that can be queried so that
    consumers (horizon) can know what operations an instance supports.

 5. Some other way for users to know what kind of hypervisor they're on.  Perhaps
    there is an established image property that would work here?
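
For option 1, the policy.json change being described would look roughly like
this (illustrative only; an empty rule means any authenticated user passes the
check, and a real change would likely be scoped more carefully, as noted above):

    "os_compute_api:os-extended-server-attributes": "",
    "os_compute_api:os-hypervisors": "",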

If we're okay with exposing the hypervisor_type to users, then #2 is pretty
quick and easy, and could be done in Mitaka.  Option 4 is probably the best
long-term solution, but I think it is best done in 'N' as it needs lots of
discussion.

Yours Tony.

[1] https://bugs.launchpad.net/nova/+bug/1483639


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Deprecation of OFAgent in Mitaka

2015-11-05 Thread fumihiko kakuma
Hi,

The Ryu team added ofagent as an ML2 mechanism driver that implements
a Python-native OpenFlow agent using the Ryu library.

In Liberty, the OVS ML2 driver gained the "native" of_interface
driver, which uses the Ryu library to communicate with OVS switches.
The Ryu team believes this is a better solution than the ofagent driver.

We therefore plan to deprecate ofagent in Mitaka and remove
it in the N cycle.
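
For operators moving off ofagent, the switch on the ovs agent side is the
of_interface option; a rough sketch, assuming the usual ml2/ovs agent config
file and section (please verify against your release's documentation):

    [ovs]
    of_interface = native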


Thanks,
fumihiko kakuma

-- 
fumihiko kakuma 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] attaching and detaching volumes in the API

2015-11-05 Thread Chris Friesen

On 11/05/2015 12:13 PM, Murray, Paul (HP Cloud) wrote:


As part of this spec: https://review.openstack.org/#/c/221732/

I want to attach/detach volumes (and so manipulate block device mappings) when
an instance is not on any compute node (specifically, when it is shelved). Normally
this happens in a function on the compute manager synchronized on the instance uuid.
When an instance is in the shelved_offloaded state it is not on a compute host,
so the operations have to be done in the API (an existing example is when the
instance is deleted in this state – the cleanup is done in the API but is not
synchronized in this case).

One option I can see is using task states, using the expected_task_state parameter
in instance.save() to control state transitions. In the API this makes sense, as
the calls will be synchronous, so if an operation cannot be done it can be
reported back to the user in an error return. I’m sure there must be some other
options.
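
A rough sketch of the expected_task_state pattern mentioned above (the task
state name used here is purely illustrative, not something defined by the spec):

    # In the API, before touching BDMs for a shelved_offloaded instance:
    instance.task_state = task_states.ATTACHING_VOLUME  # hypothetical state name
    instance.save(expected_task_state=[None])
    # instance.save() raises UnexpectedTaskStateError if another request already
    # changed task_state, so the API can return an error instead of racing.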


Whatever you do requires a single synchronization point.  If we can't use 
nova-compute, the only other option is the database.   (Since we don't yet have 
a DLM.)


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Chris Friesen

On 11/05/2015 08:33 AM, Andrew Laski wrote:

On 11/05/15 at 01:28pm, Murray, Paul (HP Cloud) wrote:



Or more specifically, the migrate and resize API actions both call the resize
function in the compute api. As Ed said, they are basically the same behind
the scenes. (But the API difference is important.)


Can you be a little more specific on what API difference is important to you?
There are two differences currently between migrate and resize in the API:

1. There is a different policy check, but this only really protects the next 
bit.

2. Resize passes in a new flavor and migration does not.

Both actions result in an instance being scheduled to a new host.  If they were
consolidated into a single action with a policy check to enforce that users
specified a new flavor and admins could leave that off would that be problematic
for you?



To me, the fact that resize and cold migration share the same implementation is 
just that, an implementation detail.


From the outside they are different things...one is "take this instance and 
move it somewhere else", and the other "take this instance and change its 
resource profile".


To me, the external API would make more sense as:

1) resize

2) migrate (with option of cold or live, and with option to specify a 
destination, and with option to override the scheduler if the specified 
destination doesn't pass filters)



And while we're talking, I don't understand why "allow_resize_to_same_host" 
defaults to False.  The comments in https://bugs.launchpad.net/nova/+bug/1251266 
say that it's not intended to be used in production, but doesn't give a 
rationale for that statement.  If you're using local storage and you just want 
to add some more CPUs/RAM to the instance, wouldn't it be beneficial to avoid 
the need to copy the rootfs?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 6 November 2015

2015-11-05 Thread Lana Brindley

Hi everyone,

Wow! What a great Summit! And isn't Tokyo a truly beautiful and amazing city? 
Thank you so much to the Japanese Stackers who hosted us, and to everyone who 
came along to the docs sessions and helped us hammer out a great plan for 
Mitaka. I'm very excited about this release!

This week, I've been catching up with everything that happened during Summit, 
and also working on outreach tasks. Today, I recorded an interview with 
Foundation about the project, and I've also written a blog post on the same 
topic, which will be published soon. You might also like to check out the 
Superuser interview Anne and I did about the docs while we were in Tokyo: 
http://superuser.openstack.org/articles/openstack-documentation-why-it-s-important-and-how-you-can-contribute

== Progress towards Mitaka ==

152 days to go!

77 bugs closed so far for this release.

API Docs
* The API docs will be switched to Swagger: 
http://specs.openstack.org/openstack/docs-specs/specs/liberty/api-site.html

DocImpact
* I've removed the WIP from this blueprint, and will be working on this from 
next week: 
https://blueprints.launchpad.net/openstack-manuals/+spec/review-docimpact

RST Conversions
* Arch Guide
** https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-rst
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* Ops Guide
** https://blueprints.launchpad.net/openstack-manuals/+spec/ops-guide-rst
** Lana will reach out to O'Reilly to discuss the printed book before this work 
begins
* Config Ref
** Thanks for all the offers of help on this one! Please contact the Config Ref 
Speciality team: https://wiki.openstack.org/wiki/Documentation/ConfigRef

Reorganisations
* Arch Guide
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-reorg
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* User Guides
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
** Contact the User Guide Speciality team: 
https://wiki.openstack.org/wiki/User_Guides

Training
* Labs
** https://blueprints.launchpad.net/openstack-manuals/+spec/training-labs
* Guides
** Upstream University & 'core' component updates, EOL Upstream Wiki page.

Document openstack-doc-tools
* Need volunteers for this!

Reorganise index page
* The API docs have already moved off the front page
* We need volunteers to look at the organisation of this page and to write 
one-sentence summaries of each book

== Doc team meeting ==

Meetings will kick off again next week with the APAC meeting:

APAC: Wednesday 11 November, 00:30:00 UTC
US: Wednesday 18 November, 14:00 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!
Lana


--
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-05 Thread Vikas Choudhary
@Gal, I was asking about the "container in nova vm" case.
I'm not sure if you were referring to this case as the nested containers case. I
guess the nested containers case would be "containers inside containers", and
this could be hosted on a nova vm or a nova bm node. Is my understanding
correct?

Thanks Gal and Toni, for now I got the answer to my query related to the
"container in vm" case.

-Vikas

On Thu, Nov 5, 2015 at 6:00 PM, Gal Sagie  wrote:

> The current OVS binding proposals are not for nested containers.
> I am not sure if you are asking about that case or about the nested
> containers inside a VM case.
>
> For the nested containers, we will use Neutron solutions that support this
> kind of configuration, for example
> if you look at OVN you can define "parent" and "sub" ports, so OVN knows
> to perform the logical pipeline in the compute host
> and only perform VLAN tagging inside the VM (as Toni mentioned)
>
> If you need more clarification you can catch me on IRC as well and we can
> talk.
>
> On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Hi All,
>>
>> I would appreciate inputs on following queries:
>> 1. Are we assuming nova bm nodes to be docker host for now?
>>
>> If Not:
>>  - Assuming nova vm as docker host and ovs as networking plugin:
>> This line is from the etherpad[1], "Eachdriver would have an
>> executable that receives the name of the veth pair that has to be bound to
>> the overlay" .
>> Query 1:  As per current ovs binding proposals by Feisky[2]
>> and Diga[3], vif seems to be binding with br-int on vm. I am unable to
>> understand how overlay will work. AFAICT , neutron will configure br-tun of
>> compute machines ovs only. How overlay(br-tun) configuration will happen
>> inside vm ?
>>
>>  Query 2: Are we having double encapsulation(both at vm and
>> compute)? Is not it possible to bind vif into compute host br-int?
>>
>>  Query3: I did not see subnet tags for network plugin being
>> passed in any of the binding patches[2][3][4]. Dont we need that?
>>
>>
>> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
>> [2]  https://review.openstack.org/#/c/241558/
>> [3]  https://review.openstack.org/#/c/232948/1
>> [4]  https://review.openstack.org/#/c/227972/
>>
>>
>> -Vikas Choudhary
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] network control plane (libkv role)

2015-11-05 Thread Vikas Choudhary
@Taku,

Please have a look at this discussion. It is all about local and global
scope:
https://github.com/docker/libnetwork/issues/486


Plus, I used the same docker options as you mentioned. The fact that it was working
for networks created with the overlay driver makes me think it was not a
configuration issue. Only networks created with kuryr were not getting
synced.


Thanks
Vikas Choudhary

On Fri, Nov 6, 2015 at 8:07 AM, Taku Fukushima 
wrote:

> Hi Vikas,
>
> I thought the "capability" affected the propagation of the network state
> across nodes as well. However, in my environment, where I tried Consul and
> ZooKeeper, I observed that a new network created on one host is displayed on
> another host when I hit "sudo docker network ls", even if I set the
> capability to "local", which is the current default. So I'm just wondering
> what this capability means. The spec doesn't say much about it.
>
>
> https://github.com/docker/libnetwork/blob/8d03e80f21c2f21a792efbd49509f487da0d89cc/docs/remote.md#set-capability
>
> I saw your bug report that describes the network state propagation didn't
> happen appropriately. I also experienced the issue and I'd say it would be
> the configuration issue. Please try with the following option. I'm putting
> it in /etc/default/docker and managing the docker daemon through "service"
> command.
>
> DOCKER_OPTS="-D -H unix:///var/run/docker.sock -H :2376
> --cluster-store=consul://192.168.11.14:8500 --cluster-advertise=
> 192.168.11.18:2376"
>
> The network is the only user facing entity in libnetwork for now since the
> concept of the "service" is abandoned in the stable Docker 1.9.0 release
> and it's shared by libnetwork through libkv across multiple hosts. Endpoint
> information is stored as a part of the network information as you
> documented in the devref and the network is all what we need so far.
>
>
> https://github.com/openstack/kuryr/blob/d1f4272d6b6339686a7e002f8af93320f5430e43/doc/source/devref/libnetwork_remote_driver_design.rst#libnetwork-user-workflow-with-kuryr-as-remote-network-driver---host-networking
>
> Regarding changing the capability to "global", it totally makes sense and
> we should change it despite the networks would be shared among multiple
> hosts anyways.
>
> Best regards,
> Taku Fukushima
>
>
> On Thu, Nov 5, 2015 at 8:39 PM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Thanks Toni.
>> On 5 Nov 2015 16:02, "Antoni Segura Puimedon" <
>> toni+openstac...@midokura.com> wrote:
>>
>>>
>>>
>>> On Thu, Nov 5, 2015 at 10:47 AM, Vikas Choudhary <
>>> choudharyvika...@gmail.com> wrote:
>>>
 ++ [Neutron] tag


 On Thu, Nov 5, 2015 at 10:40 AM, Vikas Choudhary <
 choudharyvika...@gmail.com> wrote:

> Hi all,
>
> By network control plane i specifically mean here sharing network
> state across docker daemons sitting on different hosts/nova_vms in
> multi-host networking.
>
> libnetwork provides flexibility where vendors have a choice between
> network control plane to be handled by libnetwork(libkv) or remote driver
> itself OOB. Vendor can choose to "mute" libnetwork/libkv by advertising
> remote driver capability as "local".
>
> "local" is our current default "capability" configuration in kuryr.
>
> I have following queries:
> 1. Does it mean Kuryr is taking responsibility of sharing network
> state across docker daemons? If yes, network created on one docker host
> should be visible in "docker network ls" on other hosts. To achieve this, 
> I
> guess kuryr driver will need help of some distributed data-store like
> consul etc. so that kuryr driver on other hosts could create network in
> docker on other hosts. Is this correct?
>
> 2. Why we cannot  set default scope as "Global" and let libkv do the
> network state sync work?
>
> Thoughts?
>

>>> Hi Vikas,
>>>
>>> Thanks for raising this. As part of the current work on enabling
>>> multi-node we should be moving the default to 'global'.
>>>
>>>

> Regards
> -Vikas Choudhary
>



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cg

[openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-05 Thread Alex Xu
Hi, folks

Nova API sub-team is working on the swagger generation. And there is PoC
https://review.openstack.org/233446

But before we go to the next step, I really hope we can get agreement
on how to support Microversions and Actions. The PoC has a demo of
Microversions. It generates the minimum-version action as standard swagger spec;
the other version actions are named as extended attributes, like:

{
    '/os-keypairs': {
        "get": {
            'x-start-version': '2.1',
            'x-end-version': '2.1',
            'description': '',
            ...
        },
        "x-get-2.2-2.9": {
            'x-start-version': '2.2',
            'x-end-version': '2.9',
            'description': '',
            ...
        }
    }
}

x-start-version and x-end-version are the metadata for Microversions, which
the UI code should parse.

This is just based on my initial thought; another thought is
generating a full set of swagger specs for each Microversion. But I think how
to show Microversions and Actions also depends on how the doc UI will
parse them.

There is a doc project to turn swagger into a UI:
https://github.com/russell/fairy-slipper  But it doesn't support
Microversions. So I hope the doc team can work with us and help us find a
format to support Microversions and Actions which is good for both UI parsing
and swagger generation.

Any thoughts folks?

Thanks
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Change VIP address via API

2015-11-05 Thread Mike Scherbakov
Is there a way to make it more generic, not "VIP" specific? Let's say I
want to reserve address(-es) for something for whatever reason, and then I
want to use them in some tricky way.
More specifically, can we reserve IP address(-es) with some codename, and
use them later?
12.12.12.12 - my-shared-ip
240.0.0.2 - my-multicast
and then use them in puppet / whatever deployment code as $my-shared-ip and
$my-multicast?

Thanks,

On Tue, Nov 3, 2015 at 8:49 AM Aleksey Kasatkin 
wrote:

> Folks,
>
> Here is a resume of our recent discussion:
>
> 1. Add new URLs for processing VIPs:
>
> /clusters/<cluster_id>/network_configuration/vips/ (GET)
> /clusters/<cluster_id>/network_configuration/vips/<vip_id>/ (GET, PUT)
>
> where <vip_id> is the id in the ip_addrs table.
> So, user can get all VIPS, get one VIP by id, change parameters (IP
> address) for one VIP by its id.
> More possibilities can be added later.
>
> Q. Any allocated IP could be accessible via these handlers, so for now we can
> restrict the user to accessing VIPs only
> and answer with some error for other ip_addrs ids.
>
> 2. Add current VIP meta into ip_addrs table.
>
> Create new field in ip_addrs table for placing VIP metadata there.
> Current set of ip_addrs fields:
> id (int),
> network (FK),
> node (FK),
> ip_addr (string),
> vip_type (string),
> network_data (relation),
> node_data (relation)
>
> Q. We could replace vip_type (it contains VIP name now) with vip_info.
>
> 3. Allocate VIPs on cluster creation and seek VIPs at all network changes.
>
> So, VIPs will be checked (via network roles descriptions) and re-allocated
> in ip_addrs table
> at these points:
> a. create cluster
> b. modify networks configuration
> c. modify one network
> d. modify network template
> e. change nodes set for cluster
> f. change node roles set on nodes
> g. modify cluster attributes (change set of plugins)
> h. modify release
>
> 4. Add 'manual' field into VIP meta to indicate whether it is
> auto-allocated or not.
>
> So, whole VIP description may look like:
> {
> 'name': 'management'
> 'network_role': 'mgmt/vip',
> 'namespace': 'haproxy',
> 'node_roles': ['controller'],
> 'alias': 'management_vip',
> 'manual': True,
> }
>
> Example of current VIP description:
>
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L207
>
> Nailgun will re-allocate VIP address if 'manual' == False.
>
> 5. Q. what to do when the given address overlaps with the network from
> another
> environment? overlaps with the network of current environment which does
> not match the
> network role of the VIP?
>
> Use '--force' parameter to change it. PUT will fail otherwise.
>
>
> Guys, please review this and share your comments here,
>
> Thanks,
>
>
>
> Aleksey Kasatkin
>
>
> On Tue, Nov 3, 2015 at 10:47 AM, Aleksey Kasatkin 
> wrote:
>
>> Igor,
>>
>> > For VIP allocation we should use POST request. It's ok to use PUT for
>> setting (changing) IP address.
>>
>> My proposal is about setting IP addresses for VIPs only (auto and
>> manual).
>> No any other allocations.
>> Do you propose to use POST for first-time IP allocation and PUT for IP
>> re-allocation?
>> Or use POST for adding entries to some new 'vips' table (so that all VIPs
>> descriptions
>> will be added there from network roles)?
>>
>> > We don't store network_role, namespace and node_roles within VIPs.
>> > They are belonged to network roles. So how are you going to retrieve
>> > them? Did you plan to make some changes to our data model? You know,
>> > it's not a good idea to make connections between network roles and
>> > VIPs each time your make a GET request to list them.
>>
>> It's our current format we use in API when VIPs are being retrieved.
>> Do you propose to use different one for address allocation?
>>
>> > Should we return VIPs that aren't allocated, and if so - why? If they
>> > would be just, you know, fetched from network roles - that's a bad
>> > design. Each VIP should have an explicit entry in VIPs database table.
>>
>> I propose to return VIPs even w/o IP addresses to show user what VIPs he
>> has
>> so he can assign IP addresses to them. Yes, I supposed that the
>> information
>> will be retrieved from network roles as it is done now. Do you propose to
>> create
>> separate table for VIPs or extend ip_addrs table to store VIPs
>> information?
>>
>> > We definitely should handle `null` this way, but I think from API POV
>> > it would be more clearer just do not pass `ipaddr` value if user wants
>> > it to be auto allocated. I mean, let's keep `null` as implementation
>> > details ans force API users just do not pass this key if they don't
>> > want to.
>>
>> Oh, I didn't write it here, I thought about keeping IP addresses as is
>> when
>> corresponding key is skipped by the user.
>>
>> >The thing is that there's no way to *warn* users through API. You
>> > could either reject or accept request. So all we can do is to
>> > introduce some `force` flag, and if it's passed - ignore

Re: [openstack-dev] [Neutron][kuryr] network control plane (libkv role)

2015-11-05 Thread Taku Fukushima
Hi Vikas,

I thought the "capability" affected the propagation of the network state
across nodes as well. However, in my environment, where I tried Consul and
ZooKeeper, I observed that a new network created on one host is displayed on
another host when I hit "sudo docker network ls", even if I set the
capability to "local", which is the current default. So I'm just wondering
what this capability means. The spec doesn't say much about it.

https://github.com/docker/libnetwork/blob/8d03e80f21c2f21a792efbd49509f487da0d89cc/docs/remote.md#set-capability
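
For reference, the capability in question is just the scope value a remote
driver reports from its GetCapabilities call (per the remote driver spec linked
above), e.g.:

    {
        "Scope": "global"
    }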

I saw your bug report that describes the network state propagation didn't
happen appropriately. I also experienced the issue and I'd say it would be
the configuration issue. Please try with the following option. I'm putting
it in /etc/default/docker and managing the docker daemon through "service"
command.

DOCKER_OPTS="-D -H unix:///var/run/docker.sock -H :2376
--cluster-store=consul://192.168.11.14:8500 --cluster-advertise=
192.168.11.18:2376"

The network is the only user facing entity in libnetwork for now since the
concept of the "service" is abandoned in the stable Docker 1.9.0 release
and it's shared by libnetwork through libkv across multiple hosts. Endpoint
information is stored as a part of the network information as you
documented in the devref and the network is all what we need so far.

https://github.com/openstack/kuryr/blob/d1f4272d6b6339686a7e002f8af93320f5430e43/doc/source/devref/libnetwork_remote_driver_design.rst#libnetwork-user-workflow-with-kuryr-as-remote-network-driver---host-networking

Regarding changing the capability to "global", it totally makes sense and
we should change it despite the networks would be shared among multiple
hosts anyways.

Best regards,
Taku Fukushima


On Thu, Nov 5, 2015 at 8:39 PM, Vikas Choudhary 
wrote:

> Thanks Toni.
> On 5 Nov 2015 16:02, "Antoni Segura Puimedon" <
> toni+openstac...@midokura.com> wrote:
>
>>
>>
>> On Thu, Nov 5, 2015 at 10:47 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> ++ [Neutron] tag
>>>
>>>
>>> On Thu, Nov 5, 2015 at 10:40 AM, Vikas Choudhary <
>>> choudharyvika...@gmail.com> wrote:
>>>
 Hi all,

 By network control plane i specifically mean here sharing network state
 across docker daemons sitting on different hosts/nova_vms in multi-host
 networking.

 libnetwork provides flexibility where vendors have a choice between
 network control plane to be handled by libnetwork(libkv) or remote driver
 itself OOB. Vendor can choose to "mute" libnetwork/libkv by advertising
 remote driver capability as "local".

 "local" is our current default "capability" configuration in kuryr.

 I have following queries:
 1. Does it mean Kuryr is taking responsibility of sharing network state
 across docker daemons? If yes, network created on one docker host should be
 visible in "docker network ls" on other hosts. To achieve this, I guess
 kuryr driver will need help of some distributed data-store like consul etc.
 so that kuryr driver on other hosts could create network in docker on other
 hosts. Is this correct?

 2. Why we cannot  set default scope as "Global" and let libkv do the
 network state sync work?

 Thoughts?

>>>
>> Hi Vikas,
>>
>> Thanks for raising this. As part of the current work on enabling
>> multi-node we should be moving the default to 'global'.
>>
>>
>>>
 Regards
 -Vikas Choudhary

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Logging - filling up my tiny SSDs

2015-11-05 Thread Tony Breeds
On Thu, Nov 05, 2015 at 09:52:07PM +, Sean M. Collins wrote:

> I'll make sure to name the variable appropriately. Some ideas:
> 
> SEAN_COLLINS_CREEPY_BASEMENT_DEVSTACK_LAB
> SEANS_DISCOUNT_DEVSTACK_EMPORIUM
> ANT_SIZED_SSD

ALL_YOUR_DISK_ARE_BELONG_TO_SCREEN?

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [Bug#1497073]The return sample body of sample-list is different when use -m and not

2015-11-05 Thread liusheng
I don't think we need two APIs providing duplicated functionality; the 
"sample-list -m" command actually invokes the "GET 
/v2/meters/<meter_name>" API, so it is more like a meter-related API, not a 
sample one. I personally prefer to mark the "sample-list -m" command 
deprecated and drop it in a future cycle. Is this reasonable?


On 2015/11/6 6:39, gord chung wrote:
i'm sort of torn on this item. there's a general feeling that, 
regarding the api, nothing should be dropped, so i'm hesitant to actually 
deprecate it. i think changing the data is also very dangerous when it 
comes to compatibility (even though keeping it increases inconsistency).


maybe the better solution is to document that these are different APIs 
and will return different results.


On 05/11/2015 2:30 AM, Lin Juan IX Xia wrote:

Hi,

Here is an open bug : https://bugs.launchpad.net/ceilometer/+bug/1497073

Is it a bug or not?

For the command "ceilometer sample-list --meter cpu", it calls 
"/v2/meter" API and return the OldSample objects
which return body is different from "ceilometer sample-list --query 
'meter=cpu'".
To fix this inconformity, we can deprecate the command using -m or 
fix it to return the same body as command sample-list

Best Regards,
Xia Linjuan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] distributing work using work items - call for participation in distributed blueprint development

2015-11-05 Thread Steven Dake (stdake)
HI folks,

Sam Yaple had suggested we try using Work Items to track our work rather than 
Etherpad for complex distributed tasks.  I've picked a pretty easy blueprint 
which should be mostly one-line patches where everyone can chip in.  The work 
should be pretty easy, even for new contributors to the project - so please 
feel free to sign up for contributing work even if you are new to the project.  
If you're unable to set your name in the work items field, ping sdake on irc to 
be added to the kolla-drivers group.

The blueprint is:
https://blueprints.launchpad.net/kolla/+spec/drop-root

The goal of the blueprint is to run the processes for each container as the 
correct UID instead of root (except for the case where the container requires 
root to do its job).  The containers that do need root are easy to pick out in 
the ansible files by the privileged: true flag.  The real goal of this blueprint 
is to test whether this new work items workflow is faster and more effective 
than etherpad, while also delivering this essential security work for mitaka-1 
(deadline December 4th).
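
As a rough illustration of what these patches tend to look like (the service
name below is only an example, not taken from the blueprint), the image simply
switches from root to the service's own user near the end of its Dockerfile:

    USER neutron

Containers marked privileged: true in the ansible files are the ones that keep
running as root.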

Please take a moment to sign up for 1-4 container sets.  To do that, click the 
yellow checkbox in the work items field in launchpad, and then replace the 
"unassigned" entry next to the work item with your irc nickname.  I'd like this 
work to finish as rapidly as possible, so if you assign yourself to a container 
set, please try to knock out the work by next Friday (November 13th).

Regards,
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Pagination in thre API

2015-11-05 Thread Zhenyu Zheng
So let's work on the API WG guideline first; I'm looking forward to getting it
done soon, as pagination is actually very useful in production deployments.
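
For anyone new to the pattern under discussion, the marker/limit style nova
already uses for listing servers is the sort of thing the guideline would
standardize; a sketch, with a placeholder marker value:

    GET /v2.1/servers?limit=100
    GET /v2.1/servers?limit=100&marker=<uuid-of-last-server-on-previous-page>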

On Thu, Nov 5, 2015 at 11:16 PM, Everett Toews 
wrote:

> On Nov 5, 2015, at 5:44 AM, John Garbutt  wrote:
>
>
> On 5 November 2015 at 09:46, Richard Jones  wrote:
>
> As a consumer of such APIs on the Horizon side, I'm all for consistency in
> pagination, and more of it, so yes please!
>
> On 5 November 2015 at 13:24, Tony Breeds  wrote:
>
>
> On Thu, Nov 05, 2015 at 01:09:36PM +1100, Tony Breeds wrote:
>
> Hi All,
>Around the middle of October a spec [1] was uploaded to add
> pagination
> support to the os-hypervisors API.  While I recognize the use case it
> seemed
> like adding another pagination implementation wasn't an awesome idea.
>
> Today I see 3 more requests to add pagination to APIs [2]
>
> Perhaps I'm over thinking it but should we do something more strategic
> rather
> than scattering "add pagination here".
>
>
> +1
>
> The plan, as I understand it, is to first finish off this API WG guideline:
>
> http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html
>
>
>
> An attempt at an API guideline for pagination is here [1] but hasn't
> received any updates in over a month, which can be understandable as
> sometimes other work takes precedence.
>
> Perhaps we can get that guideline moving again?
>
> If it's becoming difficult to reach agreement on that approach in the
> guideline, it could be worthwhile to take a step back and do some analysis
> on the way pagination is done in the more established APIs. I've found that
> such analysis can be very helpful as you're moving forward from a known
> state.
>
> The place for that analysis is in Current Design [2] by filling in the
> Pagination page. You can find many examples of such analysis from the
> Current Design like Sorting [3].
>
> Cheers,
> Everett
>
>
> [1] https://review.openstack.org/#/c/190743/
> [2] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design
> [3]
> https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Sorting
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] App Catalog IRC meeting minutes - 11/5/2015

2015-11-05 Thread Christopher Aedo
We had a nice meeting this morning, though it was a touch light on
attendance (likely due to the combined  joys of daylight savings time
and jet lag).  In addition to a recap from the summit sessions and
talking a bit about taking next steps with the API work, we talked
about setting a second time for the IRC meeting.

In the next week or so I'll work on finding someone who can chair a
meeting in an Australia/Asian time zone, and we'll transition to
switching between the two meeting times.  If you have the spare
capacity to chair an IRC meeting, please let me know!

Please join us on #openstack-app-catalog - thanks!

-Christopher

=
#openstack-meeting-3: app-catalog
=
Meeting started by docaedo at 17:00:49 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/app_catalog/2015/app_catalog.2015-11-05-17.00.log.html
.
Meeting summary
---
* rollcall  (docaedo, 17:01:24)
  * LINK:

https://wiki.openstack.org/wiki/Meetings/app-catalog#Proposed_Agenda_for_November_5th.2C_2015_.281700_UTC.29
(docaedo, 17:02:38)
* Tokyo Summit update (docaedo)  (docaedo, 17:05:18)
  * LINK:
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078217.html
(docaedo, 17:07:17)
  * LINK: https://etherpad.openstack.org/p/TYO-app-catalog  (docaedo,
17:13:21)
  * LINK:
https://etherpad.openstack.org/p/murano-mitaka-contributors-meetup
(docaedo, 17:13:24)
* status updates  (docaedo, 17:17:28)
* Alternating times/regions for IRC meeting  (docaedo, 17:19:34)
* Next steps for API work  (docaedo, 17:31:59)
* Open discussion  (docaedo, 17:42:47)

Meeting ended at 18:01:26 UTC.

People present (lines said)
---
* docaedo (96)
* kzaitsev_mb (37)
* drwahl (27)
* j_king (7)
* openstack (3)
* kfox_ (2)
* kfox (1)

Generated by `MeetBot`_ 0.1.4

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Learning to Debug the Gate

2015-11-05 Thread Anita Kuno
On 11/03/2015 05:30 PM, Anita Kuno wrote:
> On 11/02/2015 12:39 PM, Anita Kuno wrote:
>> On 10/29/2015 10:42 PM, Anita Kuno wrote:
>>> On 10/29/2015 08:27 AM, Anita Kuno wrote:
 On 10/28/2015 12:14 AM, Matt Riedemann wrote:
>
>
> On 10/27/2015 4:08 AM, Anita Kuno wrote:
>> Learning how to debug the gate was identified as a theme at the
>> "Establish Key Themes for the Mitaka Cycle" cross-project session:
>> https://etherpad.openstack.org/p/mitaka-crossproject-themes
>>
>> I agreed to take on this item and facilitate the process.
>>
>> Part one of the conversation includes referencing this video created by
>> Sean Dague and Dan Smith:
>> https://www.youtube.com/watch?v=fowBDdLGBlU
>>
>> Please consume this as you are able.
>>
>> Other suggestions for how to build on this resource were mentioned and
>> will be coming in the future but this was an easy, actionable first step.
>>
>> Thank you,
>> Anita.
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/tales-from-the-gate-how-debugging-the-gate-helps-your-enterprise
>
>

 The source for the definition of "the gate":
 http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n34

 Thanks for following along,
 Anita.

>>>
>>> This is the status page showing the status of our running jobs,
>>> including patches in the gate pipeline: http://status.openstack.org/zuul/
>>>
>>> Thank you,
>>> Anita.
>>>
>>
>> This is a simulation of how the gate tests patches:
>> http://docs.openstack.org/infra/publications/zuul/#%2818%29
>>
>> Click in the browser window to advance the simulation.
>>
>> Thank you,
>> Anita.
>>
> 
> Here is a presentation that uses the slide deck linked above, I
> recommend watching: https://www.youtube.com/watch?v=WDoSCGPiFDQ
> 
> Thank you,
> Anita.
> 

Three links in this edition of Learning to Debug the Gate:

The view that tracks our top bugs:
http://status.openstack.org/elastic-recheck/

The logstash queries that create the above view:
http://git.openstack.org/cgit/openstack-infra/elastic-recheck/tree/queries

Logstash itself, where you too can practice creating queries:
http://logstash.openstack.org

Note: in logstash the query is the transferable piece of information.
Filters can help you create a query, they do not populate a query. The
information that is in the query bar is what is important here.

Practice making some queries of your own.
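
As a starting point, a query typed into the logstash query bar usually looks
something like this (the values here are only an illustration, not a real bug
signature):

    message:"Timed out waiting for a reply" AND tags:"screen-n-cpu.txt" AND build_status:"FAILURE"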

Thanks for reading,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] manila-api failure in liberty

2015-11-05 Thread Igor Feoktistov
Hi Valeriy,

Thank you. The updated api-paste.ini resolved the manila-api failure.

Thanks,
Igor.

> Hello Igor,
> 
> The mentioned error indicates that the file "etc/manila/api-paste.ini" was not
> updated with the one from the new version of Manila. This file depends on the
> version of the project and can differ from release to release. So, just copy the
> Liberty version of this file to "/etc/manila/api-paste.ini" and then run the
> Liberty Manila API service.

> -- 
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomaryov at mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DefCore] Request for reviews and comments for 2016.01 DefCore (interop) guideline

2015-11-05 Thread Egle Sigler
Hello OpenStack Community,


The DefCore guideline for 2016.01 is now up for review, and we need your 
feedback. Please review and comment: https://review.openstack.org/#/c/239830/


At this time, we need feedback for capabilities that will become advisory in 
2016.01 and required in 2016.07:


"advisory": [
   "networks-l3-router",
   "networks-l2-CRUD",
   "networks-l3-CRUD",
   "networks-security-groups-CRUD",
   "compute-list-api-versions",
   "images-v2-remove",
   "images-v2-update",
   "images-v2-share",
   "images-v2-import",
   "images-v2-list",
   "images-v2-delete",
   "images-v2-get",
   "volumes-v2-create-delete",
   "volumes-v2-attach-detach",
   "volumes-v2-snapshot-create-delete",
   "volumes-v2-get",
   "volumes-v2-list",
   "volumes-v2-update",
   "volumes-v2-copy-image-to-volume",
   "volumes-v2-copy-volume-to-image",
   "volumes-v2-clone",
   "volumes-v2-qos",
   "volumes-v2-availability-zones",
   "volumes-v2-extensions",
   "volumes-v2-metadata",
   "volumes-v2-transfer",
   "volumes-v2-reserve",
   "volumes-v2-readonly",
   "identity-v3-api-discovery"
 ],


Each of these capabilities has Tempest tests associated with it. Please 
review and provide feedback. At this point, we can only remove advisory 
capabilities, not others.


How to get involved in DefCore:

Join the mailing list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee

Find us on IRC (chat.freenode.net): #openstack-defcore

Submit, review, comment: 
https://review.openstack.org/#/q/status:open+project:openstack/defcore,n,z

Join our weekly meetings on IRC: 
https://wiki.openstack.org/wiki/Governance/DefCoreCommittee#Meetings


New to DefCore?  Some pointers:

Intro to DefCore with heavy references to Dr. Who 
http://www.slideshare.net/markvoelker/defcore-the-interoperability-standard-for-openstack-53040869

DefCore 101 Tokyo presentation and slides: 
https://www.youtube.com/watch?v=MfUAuObSkK8  
http://www.slideshare.net/rhirschfeld/tokyo-defcore-presentation

Wiki: https://wiki.openstack.org/wiki/DefCore

Hacking file: https://github.com/openstack/defcore/blob/master/HACKING.rst


Please let me know if you have any questions!

Thank you,

Egle Sigler

DefCore Committee Co-Chair


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2015-11-05 13:18:13 -0800:
> You're assuming there are only 2 choices,
>  zk or db+rabbit. I'm claiming both are suboptimal at present. A 3rd might 
> be needed. Though even with its flaws, the db+rabbit choice has a few 
> benefits too.
> 

Well, I'm assuming it is zk/etcd/consul, because while the java argument
is rather religious, the reality is all three are significantly different
from databases and message queues and thus will be "snowflakes". But yes,
I _am_ assuming that Zookeeper is a natural, logical, simple choice,
and that fact that it runs in a jvm is a poor reason to avoid it.

> You also seem to assert that to support large clouds, the default must be 
> something that can scale that large. While that would be nice, I don't think 
> it's a requirement if it's overly burdensome on deployers of non-huge clouds.
> 

I think the current solution scales poorly even for medium-sized
clouds. Only the tiniest of clouds with the fewest nodes can really
sustain all of that polling without incurring cost for overhead
that would be better spent on servicing users.

> I don't have metrics, but I would be surprised if most deployments today 
> (production + other) used 3 controllers with a full ha setup. I would guess 
> that the majority are single controller setups. With those, the overhead of 
> maintaining a whole dlm like zk seems like overkill. If db+rabbit would work 
> for that one case, that would be one less thing to have to setup for an op. 
> They already have to setup db+rabbit. Or even a clm plugin of some sort, that 
> won't scale, but would be very easy to deploy, and change out later when 
> needed would be very useful.
> 

We do have metrics:

http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf

Page 35, "How many physical compute nodes do OpenStack clouds have?"


10-99:    42%
1-9:      36%
100-999:  15%
1000+:     7%

So for respondents to that survey, yes, "most" are running less than 100
nodes. However, by compute node count, if we extrapolate a bit:

There were 154 respondents, so (see the sketch below):

10-99   * 42% =   640 - 6403 nodes
1-9     * 36% =    55 - 498 nodes
100-999 * 15% =  2300 - 23076 nodes
1000+   * 7%  =     1 - 107789 nodes
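A quick sketch of that arithmetic (not from the survey itself; the upper
bound assumed for the 1000+ bucket, and the rounding, differ slightly from
the figures quoted above):

  # extrapolate total compute nodes per bucket from the survey percentages
  respondents = 154
  buckets = [
      ('1-9',     0.36, 1,    9),
      ('10-99',   0.42, 10,   99),
      ('100-999', 0.15, 100,  999),
      ('1000+',   0.07, 1000, 9999),  # upper bound is an assumption
  ]
  for name, share, low, high in buckets:
      n = respondents * share          # estimated respondents in this bucket
      print('%-8s %6d - %d nodes' % (name, n * low, n * high))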

So in terms of the number of actual computers running OpenStack compute,
as an example, from the survey respondents, there are more computes
running in *one* of the clouds with more than 1000 nodes than there are
in *all* of the clouds with less than 10 nodes, and certainly more in
all of the clouds over 1000 nodes than in all of the clouds with less
than 100 nodes.

What this means, to me, is that the investment in OpenStack should focus
on those with > 1000, since those orgs are definitely investing a lot
more today. We shouldn't make it _hard_ to do a tiny cloud, but I think
it's ok to make the tiny cloud less efficient if it means we can grow
it into a monster cloud at any point and we continue to garner support
from orgs who need to build large scale clouds.

(I realize I'm biased because I want to build a cloud with more than
1000 nodes ;)

> etcd is starting to show up in a lot of other projects, and so it may be at 
> sites already. Being able to support it may be less of a burden to operators 
> than zk in some cases.
> 

Sure, just like some shops already have postgres and in theory you can
still run OpenStack on postgres. But the testing level for postgres
support is so abysmal that I'd be surprised if anybody was actually
_choosing_ to do this. I can see this going the same way, where we give
everyone a choice, but then end up with almost nobody using any
alternative choices because the community has only rallied around the
one dominant choice.

> If your cloud grows to the point where the dlm choice really matters for 
> scalability/correctness, then you probably have enough staff members to deal 
> with adding in zk, and that's probably the right choice.
> 

If your cloud is 40 compute nodes and three nines (which, let's face
it, that's the availability profile of a cloud with one controller), we
can just throw Zookeeper up untuned and satisfy the needs. Why would we
want to put up a custom homegrown db+mq solution and then force a change
later on if the cloud grows? A single code path seems a lot better than
multiple code paths, some of which are not really well tested.

> You can have multiple suggested things in addition to one default. Default to 
> the thing that makes the most sense in the most common deployments, and make 
> specific recommendations for certain scenarios, like "if greater than 100 
> nodes, we strongly recommend using zk" or something to that effect.
> 

Choices are not free either. Just edit that statement there: "We
strongly recommend using zk." Nothing about ZK, etcd, or consul,
invalidates running on a small cloud. In many ways it makes things
simpler, since the user doesn't have to decide on a DLM, but instead
just installs the thing we recommend.

___

Re: [openstack-dev] [Ceilometer]:Subscribe and Publish Notification frame work in Ceilometer !

2015-11-05 Thread gord chung



On 05/11/2015 5:11 AM, Raghunath D wrote:

Hi Pradeep,

Presently we are looking for a monitoring service. Using the monitoring 
service, users/applications will subscribe to a few notifications/events 
from the openstack infrastructure, and the monitoring service will publish 
these notifications to the users/applications.

We are exploring Ceilometer for this purpose. We came across the blueprint 
below, which is similar to our requirement.


 https://blueprints.launchpad.net/ceilometer/+spec/declarative-notifications.


i'm not exactly clear on what you are trying to achieve. that said, the 
basic premise of the above blueprint is that if serviceX (nova, neutron, 
etc...) starts publishing a new notification with a metric of interest, 
Ceilometer can be easily configured to capture said metric by adding a 
metric definition to a definition file[1] or a custom definition 
file[2]. the same can be done for events[3].




We have a few queries on the declarative-notifications framework; could you 
please help us address them:

1. We are looking for an API for subscribing to and publishing 
   notifications. Does this framework expose any such API? If yes, could 
   you please provide us the API doc or spec on how to use it.
2. If the framework doesn't have such an API, is any development group 
   working in this area?
3. Please suggest what would be the best place in the ceilometer 
   notification framework (publisher/dispatcher/...) to implement the 
   Subscribe and Publish API.


from what is described, it seems like you'd like Ceilometer to capture a 
notification and republish it rather than store it in a Ceilometer-supported 
storage driver (ie Gnocchi, ElasticSearch, SQL, etc...). currently, the only 
way to do this is to not enable the collector service. that way, the 
Event/Sample will be published to a message queue (the default) which you 
can configure your service to pull from. currently, i don't believe 
oslo.messaging supports a pub/sub workflow. alternatively, you can use one 
of the other publishers[4]. the kafka publisher should allow you to do a 
pub/sub type workflow. i know RAX has atom hopper[5] which uses atom feeds 
to support pub/sub functionality. there were discussions on adding support 
for this but no work has been done on it. feel free to propose it if you 
feel it's worthwhile.


[1] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/meter/data/meters.yaml
[2] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/meter/notifications.py#L31
[3] 
https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/event_definitions.yaml
[4] 
http://docs.openstack.org/admin-guide-cloud/telemetry-data-retrieval.html#publishers

[5] http://atomhopper.org/

cheers,

--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Match type checking from oslo.config.

2015-11-05 Thread Sofer Athlan-Guyot
Hunter Haugen  writes:

>> Ouha!  I didn't know that property could have a parent class defined.
>> This is nice.  Does it also work for parameters?
>
> I haven't tried, but property is just a subclass of parameter so
> truthy could probably be made a parameter then become a parent of
> either a property or a parameter.

I will make a test tomorrow and report back how it goes, but you're
right, it should be ok.

>
>>
>> The NetScalerTruthy is more or less what would be needed for truthy stuff.
>>
>> On my side I came up with this solution (for different stuff, but the
>> same principle could be used here as well):
>>
>> https://review.openstack.org/#/c/238954/10/lib/puppet_x/keystone/type/read_only.rb
>>
>> And I call it like that:
>>
>>   newproperty(:id) do
>> include PuppetX::Keystone::Type::ReadOnly
>>   end
>>
>> I was thinking of extending this scheme to have needed types (Boolean,
>> ...):
>>
>>   newproperty(:truth) do
>> include PuppetX::Openstack::Type::Boolean
>>   end
>>
>> Your solution in NetScalerTruthy is nice, integrated with puppet, but
>> requires a function call.
>
> The function call is to a) pass documentation inline (since I assume
> every attribute has different documentation so didn't want to hardcode
> it in the truthy class), and b) pass the default truthy/falsy values
> that should be exposed to the provider (ie, allow you to cast all
> truthy values to `"enable"` and `"disable"` instead of only supporting
> `true` and `false`).
>
> The truthy class could obviously be implemented such that if no block
> is passed to the attribute then the method is automatically called
> with default values, then you wouldn't even need the `include` mixin.

That looks like a perfect interface.  I'm going to try this on some
code.  I will report here tomorrow, hopefully in a small review :)

Thanks again for those great insights.
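As a side note, the oslo.config coercion being mirrored here (the
TRUE_VALUES/FALSE_VALUES quoted further down in this thread) behaves roughly
like this on the Python side; a minimal sketch using oslo_config.types:

  # oslo_config.types.Boolean accepts the truthy/falsy strings quoted
  # below in this thread and raises ValueError for anything else.
  from oslo_config import types

  boolean = types.Boolean()

  boolean('yes')      # => True
  boolean('0')        # => False
  boolean('enable')   # => raises ValueError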

>>
>> My "solution" require no function call unless you have to pass
>> parameters. If you have to pass parameter, the interface I used is a
>> preset function.  Here is an example:
>>
>> https://review.openstack.org/#/c/239434/8/lib/puppet_x/keystone/type/required.rb
>>
>> and you use it like this:
>>
>>   newparam(:type) do
>> isnamevar
>> def required_custom_message
>>   'Not specifying type parameter in Keystone_endpoint is a bug. ' \
>> 'See bug https://bugs.launchpad.net/puppet-keystone/+bug/1506996 ' \
>> "and https://review.openstack.org/#/c/238954/ for more information.\n"
>> end
>> include PuppetX::Keystone::Type::Required
>>   end
>>
>> So, assuming a parameter can have a parent, both solutions could be
>> used.  Which one will it be:
>>  - one solution (NetScalerTruthy) is based on inheritance, mine on
>> composition.
>>  - you have a function call to make with NetScalerTruthy no matter what;
>>  - you have to define a function to pass parameters with my solution (but
>>    that shouldn't be required very often)
>>
>> I tend to prefer my resulting syntax, but that's really me ... I may be
>> biased.
>>
>> What do you think ?
>>
>>>
>>> On Mon, Nov 2, 2015 at 12:06 PM Cody Herriges 
>>> wrote:
>>>
>>> Sofer Athlan-Guyot wrote:
>>> > Hi,
>>> >
>>> > The idea would be to have some of the types defined oslo config
>>> >
>>>
> >>> http://git.openstack.org/cgit/openstack/oslo.config/tree/oslo_config/types.py
>>> > ported to puppet type. Those that looks like good candidates
>>> are:
>>> > - Boolean;
>>> > - IPAddress;
>>> > and in a lesser extend:
>>> > - Integer;
>>> > - Float;
>>> >
>>> > For instance in puppet type requiring a Boolean, we may test
>>> > "/[tT]rue|[fF]alse/", but the real thing is :
>>> >
>>> > TRUE_VALUES = ['true', '1', 'on', 'yes']
>>> > FALSE_VALUES = ['false', '0', 'off', 'no']
>>> >
>>>
>>> Good idea. I'd only add that we should convert 'true' and 'false'
>>> to
>>> real booleans for Puppet's purposes since the Puppet language is
>>> now typed.
>>>
>>> --
>>> Cody
>>>
>>> ___
>>> ___
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> --
>> Sofer Athlan-Guyot
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/o

Re: [openstack-dev] [Ceilometer] [Bug#1497073]The return sample body of sample-list is different when use -m and not

2015-11-05 Thread gord chung
i'm sort of torn on this item. there's a general feeling that, regarding 
the api, nothing should be dropped, so i'm hesitant to actually deprecate it. 
i think changing the data is also very dangerous when it comes to 
compatibility (even though keeping it increases inconsistency).


maybe the better solution is to document that these are different APIs 
and will return different results.


On 05/11/2015 2:30 AM, Lin Juan IX Xia wrote:

Hi,

Here is an open bug: https://bugs.launchpad.net/ceilometer/+bug/1497073

Is it a bug or not?

For the command "ceilometer sample-list --meter cpu", it calls the 
"/v2/meter" API and returns OldSample objects, whose return body is 
different from that of "ceilometer sample-list --query 'meter=cpu'".
To fix this inconsistency, we can deprecate the command using -m or fix 
it to return the same body as the plain sample-list command.

Best Regards,
Xia Linjuan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Dolph Mathews
On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann  wrote:

> Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > > > Can people help me work through the right set of tools for this use
> case
> > > > (has come up from several Operators) and map out a plan to implement
> it:
> > > >
> > > > Large cloud with many users coming from multiple Federation sources
> has
> > > > a policy of providing a minimal setup for each user upon first visit
> to
> > > > the cloud:  Create a project for the user with a minimal quota, and
> > > > provide them a role assignment.
> > > >
> > > > Here are the gaps, as I see it:
> > > >
> > > > 1.  Keystone provides a notification that a user has logged in, but
> > > > there is nothing capable of executing on this notification at the
> > > > moment.  Only Ceilometer listens to Keystone notifications.
> > > >
> > > > 2.  Keystone does not have a workflow engine, and should not be
> > > > auto-creating projects.  This is something that should be performed
> via
> > > > a Heat template, and Keystone does not know about Heat, nor should
> it.
> > > >
> > > > 3.  The Mapping code is pretty static; it assumes a user entry or a
> > > > group entry in identity when creating a role assignment, and neither
> > > > will exist.
> > > >
> > > > We can assume a special domain for Federated users to have per-user
> > > > projects.
> > > >
> > > > So, let's assume a Heat template that does the following:
> > > >
> > > > 1. Creates a user in the per-user-projects domain
> > > > 2. Assigns a role to the Federated user in that project
> > > > 3. Sets the minimal quota for the user
> > > > 4. Somehow notifies the user that the project has been set up.
> > > >
> > > > This last probably assumes an email address from the Federated
> > > > assertion.  Otherwise, the user hits Horizon, gets a "not
> authenticated
> > > > for any projects" error, and is stumped.
> > > >
> > > > How is quota assignment done in the other projects now?  What happens
> > > > when a project is created in Keystone?  Does that information get
> > > > transferred to the other services, and, if so, how?  Do most people
> use
> > > > a custom provisioning tool for this workflow?
> > > >
> > >
> > > I know at Dreamhost we built some custom integration that was triggered
> > > when someone turned on the Dreamcompute service in their account in our
> > > existing user management system. That integration created the account
> in
> > > keystone, set up a default network in neutron, etc. I've long thought
> we
> > > needed a "new tenant creation" service of some sort, that sits outside
> > > of our existing services and pokes them to do something when a new
> > > tenant is established. Using heat as the implementation makes sense,
> for
> > > things that heat can control, but we don't want keystone to depend on
> > > heat and we don't want to bake such a specialized feature into heat
> > > itself.
> > >
> >
> > I agree, an automation piece that is built-in and easy to add to
> > OpenStack would be great.
> >
> > I do not agree that it should be Heat. Heat is for managing stacks that
> > live on and change over time and thus need the complexity of the graph
> > model Heat presents.
> >
> > I'd actually say that Mistral or Ansible are better choices for this. A
> > service which listens to the notification bus and triggers a workflow
> > defined somewhere in either Ansible playbooks or Mistral's workflow
> > language would simply run through the "skel" workflow for each user.
> >
> > The actual workflow would probably almost always be somewhat site
> > specific, but it would make sense for Keystone to include a few basic
> ones
> > as "contrib" elements. For instance, the "notify the user" piece would
> > likely be simplest if you just let the workflow tool send an email. But
> > if your cloud has Zaqar, you may want to use that as well or instead.
> >
> > Adding Mistral here to see if they have some thoughts on how this
> > might work.
> >
> > BTW, if this does form into a new project, I suggest naming it
> > Skeleton[1]
>
> Following the pattern of Kite's naming, I think a Dirigible is a
> better way to get users into the cloud. :-)
>

lol +1

Is this use case specifically for keystone-to-keystone, or for federation
in general?

As an outcome of the Vancouver summit, we had a use case for mirroring a
federated user's project ID from the identity provider cloud to the service
provider cloud. The goal would be that a user can burst into a second cloud
and immediately receive a token scoped to the same project ID that they're
already familiar with (which implies a role assignment of some sort; for
example, member). That would have to be done in real time though, not by a
secondary service.

And with shadow users, we're looking at creating an identity (basically,
nothing but a user_id) i

Re: [openstack-dev] [manila] manila-api failure in liberty

2015-11-05 Thread Valeriy Ponomaryov
Hello Igor,

The mentioned error indicates that the file "etc/manila/api-paste.ini" was not
updated with the one from the new version of Manila. This file depends on the
version of the project and can differ from release to release. So, just copy
the Liberty version of this file to "/etc/manila/api-paste.ini" and then run
the Liberty Manila API service.

-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Gerrit maintenance for project renames 2015-11-06 (tomorrow) at 20:00 UTC

2015-11-05 Thread Jeremy Stanley
Sorry for the short notice, but the Infra team will be taking Gerrit
offline briefly from 20:00 to 20:15 tomorrow/Friday, November 6 to
rename the following projects:

openstack-infra/puppet-openstack-health ->
openstack-infra/puppet-openstack_health
openstack/akanda-rug -> openstack/astara
openstack/akanda-appliance -> openstack/astara-appliance
openstack/akanda-horizon -> openstack/astara-horizon
openstack/akanda-neutron -> openstack/astara-neutron
openstack/akanda -> openstack-attic/akanda
openstack/akanda-appliance-builder ->
openstack-attic/akanda-appliance-builder

And we'll move these lingering projects which missed the first boat
from StackForgeville to OpenStack City:

stackforge/networking-bigswitch ->
openstack/networking-bigswitch
stackforge/compass-install -> openstack/compass-install

Also if change 237936 gets corrected in the next 18 hours or so, we
may rename:

openstack/networking-bagpipe-l2 -> openstack/networking-bagpipe

As always, feel free to follow up to this message or pop into
#openstack-infra on Freenode if you have any questions/concerns.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Logging - filling up my tiny SSDs

2015-11-05 Thread Sean M. Collins
On Wed, Nov 04, 2015 at 07:25:24AM EST, Sean Dague wrote:
> On 11/02/2015 10:36 AM, Sean M. Collins wrote:
> > On Sun, Nov 01, 2015 at 10:12:10PM EST, Davanum Srinivas wrote:
> >> Sean,
> >>
> >> I typically switch off screen and am able to redirect logs to a specified
> >> directory. Does this help?
> >>
> >> USE_SCREEN=False
> >> LOGDIR=/opt/stack/logs/
> > 
> > It's not that I want to disable screen. I want screen to run, and not
> > log the output to files, since I have a tiny 16GB ssd card on these NUCs
> > and it fills it up if I leave it running for a week or so. 
> 
> If you write a patch, I think it's fine to include, however it's a
> pretty edge case. Super small disks (I didn't even realize they made SSD
> that small, I thought 120 was about the floor), and running devstack for
> long times without rebuild.

I'll make sure to name the variable appropriately. Some ideas:

SEAN_COLLINS_CREEPY_BASEMENT_DEVSTACK_LAB
SEANS_DISCOUNT_DEVSTACK_EMPORIUM
ANT_SIZED_SSD

;)



-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > > Can people help me work through the right set of tools for this use case 
> > > (has come up from several Operators) and map out a plan to implement it:
> > > 
> > > Large cloud with many users coming from multiple Federation sources has 
> > > a policy of providing a minimal setup for each user upon first visit to 
> > > the cloud:  Create a project for the user with a minimal quota, and 
> > > provide them a role assignment.
> > > 
> > > Here are the gaps, as I see it:
> > > 
> > > 1.  Keystone provides a notification that a user has logged in, but 
> > > there is nothing capable of executing on this notification at the 
> > > moment.  Only Ceilometer listens to Keystone notifications.
> > > 
> > > 2.  Keystone does not have a workflow engine, and should not be 
> > > auto-creating projects.  This is something that should be performed via 
> > > a Heat template, and Keystone does not know about Heat, nor should it.
> > > 
> > > 3.  The Mapping code is pretty static; it assumes a user entry or a 
> > > group entry in identity when creating a role assignment, and neither 
> > > will exist.
> > > 
> > > We can assume a special domain for Federated users to have per-user 
> > > projects.
> > > 
> > > So, let's assume a Heat template that does the following:
> > > 
> > > 1. Creates a user in the per-user-projects domain
> > > 2. Assigns a role to the Federated user in that project
> > > 3. Sets the minimal quota for the user
> > > 4. Somehow notifies the user that the project has been set up.
> > > 
> > > This last probably assumes an email address from the Federated 
> > > assertion.  Otherwise, the user hits Horizon, gets a "not authenticated 
> > > for any projects" error, and is stumped.
> > > 
> > > How is quota assignment done in the other projects now?  What happens 
> > > when a project is created in Keystone?  Does that information get
> > > transferred to the other services, and, if so, how?  Do most people use 
> > > a custom provisioning tool for this workflow?
> > > 
> > 
> > I know at Dreamhost we built some custom integration that was triggered
> > when someone turned on the Dreamcompute service in their account in our
> > existing user management system. That integration created the account in
> > keystone, set up a default network in neutron, etc. I've long thought we
> > needed a "new tenant creation" service of some sort, that sits outside
> > of our existing services and pokes them to do something when a new
> > tenant is established. Using heat as the implementation makes sense, for
> > things that heat can control, but we don't want keystone to depend on
> > heat and we don't want to bake such a specialized feature into heat
> > itself.
> > 
> 
> I agree, an automation piece that is built-in and easy to add to
> OpenStack would be great.
> 
> I do not agree that it should be Heat. Heat is for managing stacks that
> live on and change over time and thus need the complexity of the graph
> model Heat presents.
> 
> I'd actually say that Mistral or Ansible are better choices for this. A
> service which listens to the notification bus and triggers a workflow
> defined somewhere in either Ansible playbooks or Mistral's workflow
> language would simply run through the "skel" workflow for each user.
> 
> The actual workflow would probably almost always be somewhat site
> specific, but it would make sense for Keystone to include a few basic ones
> as "contrib" elements. For instance, the "notify the user" piece would
> likely be simplest if you just let the workflow tool send an email. But
> if your cloud has Zaqar, you may want to use that as well or instead.
> 
> Adding Mistral here to see if they have some thoughts on how this
> might work.
> 
> BTW, if this does form into a new project, I suggest naming it
> Skeleton[1]

Following the pattern of Kite's naming, I think a Dirigible is a
better way to get users into the cloud. :-)

Doug

> 
> [1] https://goo.gl/photos/EML6EPKeqRXioWfd8 (that was my front yard..)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Doug Hellmann
Excerpts from Adam Young's message of 2015-11-05 15:14:03 -0500:
> On 11/05/2015 01:09 PM, Clint Byrum wrote:
> > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> >> Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> >>> Can people help me work through the right set of tools for this use case
> >>> (has come up from several Operators) and map out a plan to implement it:
> >>>
> >>> Large cloud with many users coming from multiple Federation sources has
> >>> a policy of providing a minimal setup for each user upon first visit to
> >>> the cloud:  Create a project for the user with a minimal quota, and
> >>> provide them a role assignment.
> >>>
> >>> Here are the gaps, as I see it:
> >>>
> >>> 1.  Keystone provides a notification that a user has logged in, but
> >>> there is nothing capable of executing on this notification at the
> >>> moment.  Only Ceilometer listens to Keystone notifications.
> >>>
> >>> 2.  Keystone does not have a workflow engine, and should not be
> >>> auto-creating projects.  This is something that should be performed via
> >>> a Heat template, and Keystone does not know about Heat, nor should it.
> >>>
> >>> 3.  The Mapping code is pretty static; it assumes a user entry or a
> >>> group entry in identity when creating a role assignment, and neither
> >>> will exist.
> >>>
> >>> We can assume a special domain for Federated users to have per-user
> >>> projects.
> >>>
> >>> So, let's assume a Heat template that does the following:
> >>>
> >>> 1. Creates a user in the per-user-projects domain
> >>> 2. Assigns a role to the Federated user in that project
> >>> 3. Sets the minimal quota for the user
> >>> 4. Somehow notifies the user that the project has been set up.
> >>>
> >>> This last probably assumes an email address from the Federated
> >>> assertion.  Otherwise, the user hits Horizon, gets a "not authenticated
> >>> for any projects" error, and is stumped.
> >>>
> >>> How is quota assignment done in the other projects now?  What happens
> >>> when a project is created in Keystone?  Does that information get
> >>> transferred to the other services, and, if so, how?  Do most people use
> >>> a custom provisioning tool for this workflow?
> >>>
> >> I know at Dreamhost we built some custom integration that was triggered
> >> when someone turned on the Dreamcompute service in their account in our
> >> existing user management system. That integration created the account in
> >> keystone, set up a default network in neutron, etc. I've long thought we
> >> needed a "new tenant creation" service of some sort, that sits outside
> >> of our existing services and pokes them to do something when a new
> >> tenant is established. Using heat as the implementation makes sense, for
> >> things that heat can control, but we don't want keystone to depend on
> >> heat and we don't want to bake such a specialized feature into heat
> >> itself.
> >>
> > I agree, an automation piece that is built-in and easy to add to
> > OpenStack would be great.
> >
> > I do not agree that it should be Heat. Heat is for managing stacks that
> > live on and change over time and thus need the complexity of the graph
> > model Heat presents.
> It would be a simpler template than most, but I'm trying to avoid adding 
> additional complexity here.
> 
> >
> > I'd actually say that Mistral or Ansible are better choices for this. A
> > service which listens to the notification bus and triggers a workflow
> > defined somewhere in either Ansible playbooks or Mistral's workflow
> > language would simply run through the "skel" workflow for each user.
> >
> > The actual workflow would probably almost always be somewhat site
> > specific, but it would make sense for Keystone to include a few basic ones
> > as "contrib" elements. For instance, the "notify the user" piece would
> > likely be simplest if you just let the workflow tool send an email. But
> > if your cloud has Zaqar, you may want to use that as well or instead.
> >
> > Adding Mistral here to see if they have some thoughts on how this
> > might work.
> >
> > BTW, if this does form into a new project, I suggest naming it
> > Skeleton[1]
> 
> I really do not want it to be a new project, but rather I think it 
> should be a mapping of the capabilities of the existing projects.
> 
> 
> We had discussed Mistral in Vancouver as the listener.  Would it make 
> sense to have Keystone notify Mistral, and then Mistral kick off the 
> workflow?

Mistral would need to catch the event and take action on behalf of the
new tenant with some sort of admin rights. Is that possible now?

> 
> The one issue I waffle on is whether Keystone itself should be 
> responsible for the Keystone-specific stuff, as part of the initial log 
> in, and thus give an immediate response to the user upon first 
> authentication.

For the federation case that may make sense. For setting up a new
tenant or user, it may not.

> 
> 
> Alternatively, we could provide a fe

[openstack-dev] [neutron] Ether pad on O(n)/Linear Execution Time/Hyper-Scale

2015-11-05 Thread Ryan Moats


I promised during the DVR IRC meeting yesterday to re-run the L3 agent
experiments that I've been doing that have led to performance-based patches
over the last two months and to provide an etherpad with both the results
and the methodology.

The etherpad is up for folks to review at [1].  While writing this, I
decided to no longer call this work "O(n)" or "Linear Execution Time" but
rather "Hyper-Scale" (because that sounds so much more cool (smile)).  Most
of what is there is methodology - I've got some results from
yesterday, but I need to dig down some more, so I'll be updating that part
either tomorrow or early next week.

One thought that Kyle and I were discussing was whether the "how" part should
go into a devref, so that we aren't dependent on an etherpad.  I'm thinking
it's not a bad idea, but I'm wondering if it should only be in neutron or
if it should be elsewhere (like user docs that go along with code that
would implement [2] in oslo)...

Thoughts and comments are welcome,
Ryan Moats (regXboi)

[1] https://etherpad.openstack.org/p/hyper-scale
[2] https://bugs.launchpad.net/neutron/+bug/1512864
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Fox, Kevin M
You're assuming there are only 2 choices,
 zk or db+rabbit. I'm claiming both are suboptimal at present. A 3rd might be 
needed. Though even with its flaws, the db+rabbit choice has a few benefits too.

You also seem to assert that to support large clouds, the default must be 
something that can scale that large. While that would be nice, I don't think 
it's a requirement if it's overly burdensome on deployers of non-huge clouds.

I don't have metrics, but I would be surprised if most deployments today 
(production + other) used 3 controllers with a full ha setup. I would guess 
that the majority are single-controller setups. With those, the overhead of 
maintaining a whole dlm like zk seems like overkill. If db+rabbit would work 
for that one case, that would be one less thing to have to set up for an op. 
They already have to set up db+rabbit. Or even a dlm plugin of some sort that 
won't scale but would be very easy to deploy and change out later when needed 
would be very useful.

etcd is starting to show up in a lot of other projects, and so it may be at 
sites already. Being able to support it may be less of a burden to operators 
than zk in some cases.

If your cloud grows to the point where the dlm choice really matters for 
scalability/correctness, then you probably have enough staff members to deal 
with adding in zk, and that's probably the right choice.

You can have multiple suggested things in addition to one default. Default to 
the thing that makes the most sense in the most common deployments, and make 
specific recommendations for certain scenarios, like "if greater than 100 
nodes, we strongly recommend using zk" or something to that effect.

Thanks,
Kevin



From: Clint Byrum [cl...@fewbar.com]
Sent: Thursday, November 05, 2015 11:44 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager  
discussion @ the summit

Excerpts from Fox, Kevin M's message of 2015-11-04 14:32:42 -0800:
> To clarify that statement a little more,
>
> Speaking only for myself as an op, I don't want to support yet one more 
> snowflake in a sea of snowflakes, that works differently than all the rest, 
> without a very good reason.
>
> Java has its own set of issues associated with the JVM. Care-and-feeding 
> sorts of things. If we are to invest time/money/people in learning how to 
> properly maintain it, its easier to justify if its not just a one off for 
> just DLM,
>
> So I wouldn't go so far as to say we're vehemently opposed to java, just that 
> DLM on its own is probably not a strong enough feature all on its own to 
> justify requiring pulling in java. Its been only a very recent thing that you 
> could convince folks that DLM was needed at all. So either make java 
> optional, or find some other use cases that needs java badly enough that you 
> can make java a required component. I suspect some day searchlight might be 
> compelling enough for that, but not today.
>
> As for the default, the default should be a good reference. If most sites would 
> run with etcd or something else since java isn't needed, then don't default 
> zookeeper on.
>

There are a number of reasons, but the most important are:

* Resilience in the face of failures - The current database+MQ based
  solutions are all custom made and have unknown characteristics when
  there are network partitions and node failures.
* Scalability - The current database+MQ solutions rely on polling the
  database and/or sending lots of heartbeat messages or even using the
  database to store heartbeat transactions. This scales fine for tiny
  clusters, but when every new node adds more churn to the MQ and
  database, this will (and has been observed to) be intractable.
* Tech debt - OpenStack is inventing lock solutions and then maintaining
  them. And service discovery solutions, and then maintaining them.
  Wouldn't you rather have better upgrade stories, more stability, more
  scale, and more features?

If those aren't compelling enough reasons to deploy a mature java service
like Zookeeper, I don't know what would be. But I do think using the
abstraction layer of tooz will at least allow us to move forward without
having to convince everybody everywhere that this is actually just the
path of least resistance.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Shraddha Pandhe
Hi,

I agree with all of you about the REST Apis.

As I said before, I had to bring up the idea of a JSON blob because, based on
previous discussions, it looked like the neutron community was not willing to
enhance the schemas for the different ipam dbs. The entire rationale behind
pluggable IPAM is to provide flexibility. So, the community should be open to
ideas for enhancing the schema to incorporate more information in the db
tables. I would be extremely happy if use cases from different companies were
considered and the schema were enhanced to include specific columns in the db
schemas instead of a column with a random JSON blob.

Let's take the subnets db table for example. We have some use cases where it
would be great if the following information were associated with the subnet db
table:

1. Rack switch info
2. Backplane info
3. DHCP ip helpers
4. Option to tag allocation pools inside subnets
5. Multiple gateway addresses

We also want to store some information about the backplanes locally, so a
different table might be useful.

In a way, this information is not specific to our company. It's generic
information which ought to go with the subnets. Different companies can use
this information differently in their IPAM drivers. But the information
needs to be made available to justify the flexibility of ipam.

At Yahoo!, OpenStack is still not the source of truth for this kind of
information, and the database limitation is one of the reasons. I would prefer
to avoid having our own database, to make sure that our use-cases are always
shared with the community.








On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery  wrote:

> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:
>
>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>
>>> Hi Salvatore,
>>>
>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>> make IPAM much more powerful. Some other projects already do things like
>>> this.
>>>
>>
>> :( Actually, though "powerful" it also leads to implementation details
>> leaking directly out of the public REST API. I'm very negative on this and
>> would prefer an actual codified REST API that can be relied on regardless
>> of backend driver or implementation.
>>
>
> I agree with Jay here. We've had people propose similar things in Neutron
> before, and I've been against them. The entire point of the Neutron REST
> API is to not leak these details out. It dampens the strength of the
> logical model, and it tends to have users become reliant on backend
> implementations.
>
>
>>
>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>> there that we think is important for us.
>>>
>>
>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>> structured, not a Wild West free-for-all. The biggest problem with using
>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>> ability to evolve the API in a structured, versioned way. Instead of
>> evolving the API using microversions, instead every vendor just jams
>> whatever they feel like into the JSON blob over time. There's no way for
>> clients to know what the server will return at any given time.
>>
>> Achieving consensus on a REST API that meets the needs of a variety of
>> backend implementations is *hard work*, yes, but it's what we need to do if
>> we are to have APIs that are viewed in the industry as stable,
>> discoverable, and reliably useful.
>>
>
> ++, this is the correct way forward.
>
> Thanks,
> Kyle
>
>
>>
>> Best,
>> -jay
>>
>> Best,
>> -jay
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>>
>>>
>>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
>>> mailto:salv.orla...@gmail.com>> wrote:
>>>
>>> Arbitrary blobs are powerful tools to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for
>>> versioning or portability purposes.
>>> The parameters that should end up in such blob are typically
>>> specific for the target IPAM driver (to an extent they might even
>>> identify a specific driver to use), and therefore an API consumer
>>> who knows what backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability
>>> and not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants will be able to provide more
>>> input on this regard.
>>>
>>> In this case other approaches such as a vendor specific extension
>>> are not a solution - assuming your granularity level is the
>>> allocation pool; indeed allocation pools are not first-class neutron
>>> resources, and it is not therefore possible to have APIs which
>>> associate vendor specific properties to allocation pools.
>>>
>>> Salvatore
>>>
>>> On 4 November 2015 at 21:46, Shraddha Pandhe
>>> mailto:spandhe.openst...@gmail.com>>
>>> wrote:
>>>
>>> Hi fol

Re: [openstack-dev] DevStack errors...

2015-11-05 Thread Thales
Neil Jerram wrote:"When you say 'on Ubuntu 14.04', are we talking a completely 
fresh install with nothing else on it?  That's the most reliable way to run 
DevStack - people normally create a fresh disposable VM for this kind of work."

   -- I finally got it running!  I did what you said and created a VM.  I 
basically followed this guy's video tutorial.  The only difference is I used 
stable/liberty instead of stable/icehouse (which I guess no longer exists). 
It is, however, *very* slow on my machine, with 4 gigabytes of RAM and a 
30 GB HDD.  I did have some problems getting VirtualBox working (I know 
others are using VMware) with their "guest additions", because none of the 
standard instructions worked.  Some user on askubuntu.com had the answer, 
which gave me the bigger screen:
http://askubuntu.com/questions/451805/screen-resolution-problem-with-ubuntu-14-04-and-virtualbox


  The answer given by the guy named "Chip" and then the reply to him by "Snark" 
did the trick.   
The tutorial I used:https://www.youtube.com/watch?v=zoi8WpGwrXM


  I supplied details here in case anyone else has the same difficulties.
  Thanks for the help!
Regards, ...John

On Tuesday, November 3, 2015 3:35 AM, Neil Jerram wrote:
   

  On 02/11/15 23:56, Thales wrote:

I'm trying to get DevStack to work, but am getting errors.  Is this a good list 
to ask questions for this?  I can't seem to get answers anywhere I look.   I 
tried the openstack list, but it kind of moves slow.
Thanks for any help.
Regards, John

In case it helps, I had no problem using DevStack's stable/liberty branch 
yesterday.  If you don't specifically need master, you might try that too:

  # Clone the DevStack repository.
  git clone https://git.openstack.org/openstack-dev/devstack

  # Use the stable/liberty branch.
  cd devstack
  git checkout stable/liberty

  ...

I also just looked again at your report on openstack@.  Were you using Python 
2.7?

I expect you'll have seen discussions like 
http://stackoverflow.com/questions/23176697/importerror-no-module-named-io-in-ubuntu-14-04.
  It's not obvious to me how those can be relevant, though, as they seem to 
involve corruption of an existing virtualenv, whereas DevStack I believe 
creates a virtualenv from scratch.

When you say 'on Ubuntu 14.04', are we talking a completely fresh install with 
nothing else on it?  That's the most reliable way to run DevStack - people 
normally create a fresh disposable VM for this kind of work.

Regards,
    Neil



  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Adam Young

On 11/05/2015 01:09 PM, Clint Byrum wrote:

Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:

Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:

Can people help me work through the right set of tools for this use case
(has come up from several Operators) and map out a plan to implement it:

Large cloud with many users coming from multiple Federation sources has
a policy of providing a minimal setup for each user upon first visit to
the cloud:  Create a project for the user with a minimal quota, and
provide them a role assignment.

Here are the gaps, as I see it:

1.  Keystone provides a notification that a user has logged in, but
there is nothing capable of executing on this notification at the
moment.  Only Ceilometer listens to Keystone notifications.

2.  Keystone does not have a workflow engine, and should not be
auto-creating projects.  This is something that should be performed via
a Heat template, and Keystone does not know about Heat, nor should it.

3.  The Mapping code is pretty static; it assumes a user entry or a
group entry in identity when creating a role assignment, and neither
will exist.

We can assume a special domain for Federated users to have per-user
projects.

So, let's assume a Heat template that does the following:

1. Creates a user in the per-user-projects domain
2. Assigns a role to the Federated user in that project
3. Sets the minimal quota for the user
4. Somehow notifies the user that the project has been set up.

This last probably assumes an email address from the Federated
assertion.  Otherwise, the user hits Horizon, gets a "not authenticated
for any projects" error, and is stumped.

How is quota assignment done in the other projects now?  What happens
when a project is created in Keystone?  Does that information get
transferred to the other services, and, if so, how?  Do most people use
a custom provisioning tool for this workflow?


I know at Dreamhost we built some custom integration that was triggered
when someone turned on the Dreamcompute service in their account in our
existing user management system. That integration created the account in
keystone, set up a default network in neutron, etc. I've long thought we
needed a "new tenant creation" service of some sort, that sits outside
of our existing services and pokes them to do something when a new
tenant is established. Using heat as the implementation makes sense, for
things that heat can control, but we don't want keystone to depend on
heat and we don't want to bake such a specialized feature into heat
itself.


I agree, an automation piece that is built-in and easy to add to
OpenStack would be great.

I do not agree that it should be Heat. Heat is for managing stacks that
live on and change over time and thus need the complexity of the graph
model Heat presents.
It would be a simpler template than most, but I'm trying to avoid adding 
additional complexity here.





I'd actually say that Mistral or Ansible are better choices for this. A
service which listens to the notification bus and triggers a workflow
defined somewhere in either Ansible playbooks or Mistral's workflow
language would simply run through the "skel" workflow for each user.

The actual workflow would probably almost always be somewhat site
specific, but it would make sense for Keystone to include a few basic ones
as "contrib" elements. For instance, the "notify the user" piece would
likely be simplest if you just let the workflow tool send an email. But
if your cloud has Zaqar, you may want to use that as well or instead.

Adding Mistral here to see if they have some thoughts on how this
might work.

BTW, if this does form into a new project, I suggest naming it
Skeleton[1]


I really do not want it to be a new project, but rather I think it 
should be a mapping of the capabilities of the existing projects.



We had discussed Mistral in Vancouver as the listener.  Would it make 
sense to have Keystone notify Mistral, and then Mistral kick off the 
workflow?
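
To make that concrete, here is a rough sketch (not a finished design) of
what such a triggered workflow could do with the existing keystone client;
the domain name, role name, and the quota/notification steps are just
illustrative assumptions:

  from keystoneclient.v3 import client as keystone_v3

  # admin_ks = keystone_v3.Client(session=admin_session)  # authenticated as a cloud admin

  def provision_federated_user(admin_ks, ext_user_name):
      # assumed special domain for federated per-user projects
      domain = admin_ks.domains.find(name='federated-users')
      # 1. create a shadow user and a per-user project in that domain
      user = admin_ks.users.create(name=ext_user_name, domain=domain)
      project = admin_ks.projects.create(name=ext_user_name, domain=domain)
      # 2. assign a role on the new project
      member = admin_ks.roles.find(name='Member')
      admin_ks.roles.grant(member, user=user, project=project)
      # 3. quotas live in each service (nova, cinder, neutron, ...), so the
      #    workflow would call those APIs next; 4. notifying the user
      #    (email, Zaqar, ...) is likewise left to the workflow engine
      return project

The quota and notification calls are exactly the cross-service pieces that a
workflow engine seems better placed to own than Keystone itself.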


The one issue I waffle on is whether Keystone itself should be 
responsible for the Keystone-specific stuff, as part of the initial log 
in, and thus give an immediate response to the user upon first 
authentication.



Alternatively, we could provide feedback in Horizon etc. letting the 
user know that the process is underway, and even letting them add an 
email address for the callback if one cannot be deduced from the WebUI.



Would it make more sense to have this be a Horizon-driven workflow, using 
an unscoped Federation token?




[1] https://goo.gl/photos/EML6EPKeqRXioWfd8 (that was my front yard..)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



_

Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread gord chung
my understanding is that if you are calling stop()/wait() your intention 
is to shut down the listener. if you intend on keeping an active 
consumer on the queue, you shouldn't be calling either stop() or wait(), 
just start.
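
as a minimal sketch of that life cycle (not the app in question; the
transport setup, topic, endpoint body, and the eventlet executor choice
here are all assumptions):

  import eventlet
  eventlet.monkey_patch()   # the eventlet executor assumes this

  import oslo_messaging
  from oslo_config import cfg

  class NotificationEndpoint(object):
      # method name matches the notification priority (info/warn/error)
      def info(self, ctxt, publisher_id, event_type, payload, metadata):
          print(event_type, payload)

  transport = oslo_messaging.get_notification_transport(cfg.CONF)
  targets = [oslo_messaging.Target(topic='notifications')]
  listener = oslo_messaging.get_notification_listener(
      transport, targets, [NotificationEndpoint()], executor='eventlet')

  listener.start()   # consumer stays active; don't call stop()/wait() yet
  # ... the application keeps running and handling notifications ...
  listener.stop()    # shutting down: stop consuming first,
  listener.wait()    # then wait() for in-flight handlers to finish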


On 05/11/2015 2:07 PM, Nader Lahouti wrote:


Thanks for the pointer, I'll look into it. But one question, by 
calling stop() and then wait(), does it mean the application has to 
call start() again after the wait()? to process more messages?


I am also using 
http://docs.openstack.org/developer/oslo.messaging/server.html for the 
RPC server

Does it mean there has to be stop() and then wait() there as well?


Thanks,
Nader.



On Thu, Nov 5, 2015 at 10:19 AM, gord chung > wrote:




On 05/11/2015 1:06 PM, Nader Lahouti wrote:

Hi Doug,

I have an app that listens to notifications and used the info
provided in

http://docs.openstack.org/developer/oslo.messaging/notification_listener.html


Basically I create
1. NotificationEndpoints(object):

https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
2. NotifcationListener(object):

https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
3. and call start() and  then wait()


the correct usage is to call stop() before wait()[1]. for
reference on how to use listeners, you can see Ceilometer[2]


[1]http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
[2]
https://github.com/openstack/ceilometer/blob/master/ceilometer/utils.py#L250

-- 
gord




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Spam] Re: [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Chris Dent's message of 2015-11-05 00:08:16 -0800:

On Thu, 5 Nov 2015, Robert Collins wrote:


In the session we were told that zookeeper is already used in CI jobs
for ceilometer (was this wrong?) and thats why we figured it made a
sane default for devstack.

For clarity: What ceilometer (actually gnocchi) is doing is using tooz
in CI (gate-ceilometer-dsvm-integration). And for now it is using
redis as that was "simple".

Outside of CI it is possible to deploy ceilo, aodh and gnocchi to use
tooz for coordinating group partitioning in active-active HA setups
and shared locks. Again the standard deploy for that has been to use
redis because of availability. It's fairly understood that zookeeper
would be more correct but there are packaging concerns.



Redis jettisons all consistency on partitions... It's really ugly:

https://aphyr.com/posts/307-call-me-maybe-redis-redux

 These results are catastrophic. In a partition which lasted for
 roughly 45% of the test, 45% of acknowledged writes were thrown
 away. To add insult to injury, Redis preserved all the failed writes
 in place of the successful ones.

So... yeah. I actually think it is dangerous to have Redis in tooz at
all. One partition and you have split brains, locks granted to multiple
places, and basically the pure chaos that you were trying to prevent by
using a lock in the first place. If you're using redis, the only sane
thing to do is to shut everything down when there's a partition (which
is not easy to detect!).


This is where it gets weird; redis, imho, is a lot like openstack: a lot 
of ways to tweak it, a lot of operational modes, and a few 
clustering/failover modes.


The one that I think the above mentions is sentinel:

http://redis.io/topics/sentinel

But from my understanding the following is being created/evolving to 
make this better (to some degree):


http://redis.io/topics/cluster-tutorial

http://redis.io/topics/cluster-spec

Overall maybe we should deprecate the redis driver, and come back to it 
when clustering has been more proven out (afaik redis clustering is 
fairly new); that might be acceptable imho, if we as a community are 
willing to do this.




To contrast this with Zookeeper and Consul:

https://aphyr.com/posts/291-call-me-maybe-zookeeper
https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul

Even though etcd and consul ended up suffering from stale reads, they
added pieces to their API that allow fully consistent reads (presumably
suffering a performance penalty when doing so).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2015-11-04 14:32:42 -0800:
> To clarify that statement a little more,
> 
> Speaking only for myself as an op, I don't want to support yet one more 
> snowflake in a sea of snowflakes, that works differently than all the rest, 
> without a very good reason.
> 
> Java has its own set of issues associated with the JVM. Care-and-feeding 
> sorts of things. If we are to invest time/money/people in learning how to 
> properly maintain it, its easier to justify if its not just a one off for 
> just DLM,
> 
> So I wouldn't go so far as to say we're vehemently opposed to java, just that 
> DLM on its own is probably not a strong enough feature all on its own to 
> justify requiring pulling in java. It's been only a very recent thing that you 
> could convince folks that DLM was needed at all. So either make java 
> optional, or find some other use cases that needs java badly enough that you 
> can make java a required component. I suspect some day searchlight might be 
> compelling enough for that, but not today.
> 
> As for the default, the default should be a good reference. If most sites would 
> run with etcd or something else since java isn't needed, then don't default 
> zookeeper on.
> 

There are a number of reasons, but the most important are:

* Resilience in the face of failures - The current database+MQ based
  solutions are all custom made and have unknown characteristics when
  there are network partitions and node failures.
* Scalability - The current database+MQ solutions rely on polling the
  database and/or sending lots of heartbeat messages or even using the
  database to store heartbeat transactions. This scales fine for tiny
  clusters, but when every new node adds more churn to the MQ and
  database, this will (and has been observed to) be intractable.
* Tech debt - OpenStack is inventing lock solutions and then maintaining
  them. And service discovery solutions, and then maintaining them.
  Wouldn't you rather have better upgrade stories, more stability, more
  scale, and more features?

If those aren't compelling enough reasons to deploy a mature java service
like Zookeeper, I don't know what would be. But I do think using the
abstraction layer of tooz will at least allow us to move forward without
having to convince everybody everywhere that this is actually just the
path of least resistance.
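
For what it's worth, a minimal sketch of the tooz abstraction, assuming a
ZooKeeper backend on localhost (the member id and lock name are placeholders,
and only the backend URL changes between zookeeper://, etcd:// and redis://):

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'member-1')
    coordinator.start()

    lock = coordinator.get_lock(b'example-lock')
    with lock:
        # critical section: only one member holding this lock name runs
        # this at a time, whichever backend is behind the coordinator
        print('lock held, doing work')

    coordinator.stop()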

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Spam] Re: [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Clint Byrum
Excerpts from Chris Dent's message of 2015-11-05 00:08:16 -0800:
> On Thu, 5 Nov 2015, Robert Collins wrote:
> 
> > In the session we were told that zookeeper is already used in CI jobs
> > for ceilometer (was this wrong?) and thats why we figured it made a
> > sane default for devstack.
> 
> For clarity: What ceilometer (actually gnocchi) is doing is using tooz
> in CI (gate-ceilometer-dsvm-integration). And for now it is using
> redis as that was "simple".
> 
> Outside of CI it is possible to deploy ceilo, aodh and gnocchi to use
> tooz for coordinating group partitioning in active-active HA setups
> and shared locks. Again the standard deploy for that has been to use
> redis because of availability. It's fairly understood that zookeeper
> would be more correct but there are packaging concerns.
> 

Redis jettisons all consistency on partitions... It's really ugly:

https://aphyr.com/posts/307-call-me-maybe-redis-redux

These results are catastrophic. In a partition which lasted for
roughly 45% of the test, 45% of acknowledged writes were thrown
away. To add insult to injury, Redis preserved all the failed writes
in place of the successful ones.

So... yeah. I actually think it is dangerous to have Redis in tooz at
all. One partition and you have split brains, locks granted to multiple
places, and basically the pure chaos that you were trying to prevent by
using a lock in the first place. If you're using redis, the only sane
thing to do is to shut everything down when there's a partition (which
is not easy to detect!).

To contrast this with Zookeeper and Consul:

https://aphyr.com/posts/291-call-me-maybe-zookeeper
https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul

Even though etcd and consul ended up suffering from stale reads, they
added pieces to their API that allow fully consistent reads (presumably
suffering a performance penalty when doing so).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Networking Subteam Meetings

2015-11-05 Thread Lee Calcote
It seems integration of the SocketPlane acquisition has come to fruition in 1.9…

Lee

> On Nov 5, 2015, at 1:18 PM, Daneyon Hansen (danehans)  
> wrote:
> 
> All,
> 
> I apologize for issues with today's meeting. My calendar was updated to 
> reflect daylight savings and displayed an incorrect meeting start time. This 
> issue is now resolved. We will meet on 11/12 at 18:30 UTC. The meeting has 
> been pushed back 30 minutes from our usual start time. This is because Docker 
> is hosting a Meetup [1] to discuss the new 1.9 networking features. I 
> encourage everyone to attend the Meetup.
> 
> [1] http://www.meetup.com/Docker-Online-Meetup/events/226522306/ 
> 
> [2] 
> https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting
>  
> 
> 
> Regards,
> Daneyon Hansen
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Bugs status

2015-11-05 Thread Dmitry Pyzhov
Hi guys,

The new report is based on 'area' tags. I'm sorry for the hardly readable heap
of numbers. Below are the current numbers of open bugs, the number of bugs
opened since last Thursday, and the number of bugs closed in the same period.

Bugs in python, library and UI areas. Format: Total open(UI open/Python
open/Library open) +Total income (UI/Python/Library) -Total
outcome(UI/Python/Library)

Defects:
- Critical/high: 58(5/33/20) +23(1/8/14) -27(2/6/19)
- Medium/low: 199(44/103/52) +14(1/4/9) -21(0/13/8)
Features tracked as bug reports:
- Critical/high: 38(1/31/6) +1(0/1/0) -3(1/1/1)
- Medium/low: 79(3/61/15) +2(0/1/1) -3(0/1/2)
Technical debt bugs:
- Critical/high: 14(0/9/5) +2(0/2/0) -3(0/2/1)
- Medium/low: 91(1/68/22) +4(0/2/2) -6(0/4/2)

Let me decode the first row, which is the important one. We have 58 high- and
critical-priority open bugs: 5 of them are in UI, 33 in python and 20 in
library. In the last 7 days we got 23 new bugs and closed 27.

A little bit more about high- and critical-priority bugs. In library we fixed
as many bugs as we have open in total, which means this number no longer
depends on our fixing speed. The only way it can be reduced is by reducing
the bug income.

In python we have 33 high/critical bugs, and 15 of them are related to
features being developed. We have several really tricky bugs but we are
close to the end of the queue. It doesn't look like we can do anything to
significantly reduce the number of bugs here: we get new bugs and we fix
them.

I hope that we'll be able to focus on the 14 high-priority tech-debt bugs and
the 155 medium-priority bugs soon. That highly depends on new findings.

Bugs in other teams. Format: open total(open high) +income total(income
high) -outcome total(outcome high).
- QA: 71(21) +24(13) -21(13)
- Docs: 156(35) +6(2) -2(0)
- Devops: 62(24) +10(5) -10(8)
- Build: 43(12) +11(9) -20(13)
- CI: 63(31) +10(7) -11(10)
- MOS: 45(15) +7(4) -3(1)
- Partners: 12(5) +0(0) -0(0)
- MOS Linux: 15(5) +0(0) -1(0)
- Plugins: 3(1) +1(1) -3(2)

Let me explain the first row as an example. We have 71 QA bugs, 21 of them
with high or critical priority. 24 new bugs were created in the last 7 days,
13 of them high/critical. 21 bugs were closed during the same period, 13 of
them with high or critical priority.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Nova] Tempest Hypervisor Feature Tagging

2015-11-05 Thread Rafael Folco
Is there any way to know which hypervisor features[1] were exercised in a 
Tempest run?
From what I've seen, there is currently no way to tell which tests cover which 
features.
It looks like Tempest has UUID and service tagging, but no reference to 
hypervisor features.

It would be good to track/map the covered features and generate a report for CI.
If there is any interest in that, I'd like to validate whether metadata tagging 
(similar to the UUID tags) is a reasonable approach.

[1] http://docs.openstack.org/developer/nova/support-matrix.html 
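
As a purely illustrative sketch (the decorator and attribute names below are
hypothetical, not existing Tempest code), metadata tagging could look
something like this:

    def hypervisor_feature(*features):
        """Attach support-matrix feature names to a test for later reporting."""
        def decorator(test_func):
            tagged = set(getattr(test_func, '_hypervisor_features', ()))
            test_func._hypervisor_features = tagged | set(features)
            return test_func
        return decorator

    class ServerVolumesTest(object):   # stand-in for a real Tempest test class
        @hypervisor_feature('attach-volume', 'detach-volume')
        def test_attach_detach_volume(self):
            pass

A reporting job could then walk the discovered tests, collect
_hypervisor_features from each, and map the run against the support matrix.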


Thanks.

-rfolco

Rafael Folco
OpenStack Continuous Integration
IBM Linux Technology Center



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Networking Subteam Meetings

2015-11-05 Thread Daneyon Hansen (danehans)
All,

I apologize for issues with today's meeting. My calendar was updated to reflect 
daylight savings and displayed an incorrect meeting start time. This issue is 
now resolved. We will meet on 11/12 at 18:30 UTC. The meeting has been pushed 
back 30 minutes from our usual start time. This is because Docker is hosting a 
Meetup [1] to discuss the new 1.9 networking features. I encourage everyone to 
attend the Meetup.

[1] http://www.meetup.com/Docker-Online-Meetup/events/226522306/
[2] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-05 Thread Adrian Otto
Sometimes producing alternate implementations can be more effective than 
abstract discussions because they are more concrete. If an implementation can 
be produced (possibly multiple different implementations by different 
contributors) in a short period of time without significant effort, that’s 
usually better than a lengthy discussion. Keep in mind that even a WIP review 
can be helpful for facilitating this sort of discussion. Having a talk about 
a specific review is usually much more effective than having the discussion 
entirely in abstract terms.

Keep in mind that many OpenStack contributors speak English as a second 
language. They may actually be much more effective in expressing their ideas in 
code rather than in the form of a debate. Using alternate implementations for 
something is one way to let these contributors shine with a novel idea, even if 
they struggle to articulate themselves or feel uncomfortable in a verbal debate.

If you are about to go implement something that takes a significant effort, 
then it would be annoying to have an alternate implementation show up and you'll 
feel like your work goes to waste. The way to prevent this is to encourage all 
active contributors to share ideas in the project IRC channel, and show up 
regularly to the team meetings, and convey your intent to the technical lead. If 
you are surprised by alternate implementations for your work, that’s a symptom 
that one or more of you are not well coordinated. If we solve that, everyone 
can potentially move more quickly. Anyone struggling with this problem might 
consider the guidance I offered in Vancouver [1].

Adrian

[1] 
https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/7-habits-of-highly-effective-contributors

On Nov 4, 2015, at 7:04 PM, Vikas Choudhary 
mailto:choudharyvika...@gmail.com>> wrote:


If we look at it from the angle of the contributor whose approach turns out not 
to be better than the competing one, it will be far easier for them to accept 
the logic at the discussion stage rather than after weeks of tracking a review 
request and addressing review comments.

On 5 Nov 2015 08:24, "Vikas Choudhary" 
mailto:choudharyvika...@gmail.com>> wrote:

@Toni ,

In scenarios where two developers with different implementation approaches are 
not able to reach consensus over Gerrit or the ML, IMO the other core members 
can hold a vote or a discussion and then the PTL should take a call on which one 
to accept and allow to be implemented. Anyway, the community has to make a call 
even after the implementations exist, so why waste effort on unnecessary 
implementation?
WDYT?

On 4 Nov 2015 19:35, "Baohua Yang" 
mailto:yangbao...@gmail.com>> wrote:
Sure, thanks!
And suggest add the time and channel information at the kuryr wiki page.


On Wed, Nov 4, 2015 at 9:45 PM, Antoni Segura Puimedon 
mailto:toni+openstac...@midokura.com>> wrote:


On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang 
mailto:yangbao...@gmail.com>> wrote:
+1, Antoni!
btw, is our weekly meeting still on meeting-4 channel?
Not found it there yesterday.

Yes, it is still on openstack-meeting-4, but this week we skipped it, since 
some of us were
traveling and we already held the meeting on Friday. Next Monday it will be 
held as usual
and the following week we start alternating (we have yet to get a room for that 
one).

On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon 
mailto:toni+openstac...@midokura.com>> wrote:
Hi Kuryrs,

Last Friday, as part of the contributors meetup, we also discussed code 
contribution etiquette. Like other OpenStack projects (Magnum comes to mind), 
the etiquette for what to do when there is disagreement about the way to code a 
blueprint or fix a bug is as follows:

1.- Try to reach out so that the original implementation gets closer to a 
compromise by having the discussion in gerrit (and Mailing list if it requires 
a wider range of arguments).
2.- If a compromise can't be reached, feel free to make a separate 
implementation arguing well its difference, virtues and comparative 
disadvantages. We trust the whole community of reviewers to be able to judge 
which is the best implementation and I expect that often the reviewers will 
steer both submissions closer than they originally were.
3.- If both competing implementations get the necessary support, the core 
reviewers will take a specific decision on which to take based on technical 
merit. Important factors are:
* conciseness,
* simplicity,
* loose coupling,
* logging and error reporting,
* test coverage,
* extensibility (when an immediate pending and blueprinted feature can 
better be built on top of it).
* documentation,
* performance.

It is important to remember that technical disagreement is a healthy thing and 
should be tackled with civility. If we follow the rules above, it will lead to 
a healthier project and a more friendly community in which everybody can 
propose their vision with equal standing. Of course, sometimes there m

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Robert Collins
On 5 November 2015 at 11:32, Fox, Kevin M  wrote:
> To clarify that statement a little more,
>
> Speaking only for myself as an op, I don't want to support yet one more 
> snowflake in a sea of snowflakes, that works differently then all the rest, 
> without a very good reason.
>
> Java has its own set of issues associated with the JVM. Care, and feeding 
> sorts of things. If we are to invest time/money/people in learning how to 
> properly maintain it, its easier to justify if its not just a one off for 
> just DLM,
>
> So I wouldn't go so far as to say we're vehemently opposed to java, just that 
> DLM on its own is probably not a strong enough feature all on its own to 
> justify requiring pulling in java. Its been only a very recent thing that you 
> could convince folks that DLM was needed at all. So either make java 
> optional, or find some other use cases that needs java badly enough that you 
> can make java a required component. I suspect some day searchlight might be 
> compelling enough for that, but not today.
>
> As for the default, the default should be good reference. if most sites would 
> run with etc or something else since java isn't needed, then don't default 
> zookeeper on.

So let's be clear about the discussion at the summit.

There were three, non-conflicting and distinct concerns raised about Java.

One is the 'its a new platform for us operators to understand
operations around' - which is fair, and indeed, Java has different
(not better, different) behaviours to the CPython VM.

Secondly, 'us operators do not want to be a special snowflake, we
*want* to run the majority configuration' - which makes sense, and is
one reason to aim for a convergent stack where possible.

Thirdly, 'many of our customers *will not* run Oracle's JVM and the
stability and performance of Zookeeper on openjdk is an unknown'. The
argument was that we can't pick zk because the herd run it on Oracle's
JVM not openjdk - now there are some unquantified bits here, but it is
known that openjdk has had sufficient differences to Oracle JVM to
cause subtle bugs, so if most large zk shops are running Oracle JVM
then indeed this becomes a special-snowflake risk.

I don't recall *anyone* saying they thought zk was bad, or that they
would refuse to run it if we had chosen zk rather than tooz. We got
stuck on that third issue - there was no way to answer it in the
session, and its obviously a terrifying risk to take.

And because for every option some operators were going to be unhappy,
we fell back to the choice of not making a choice.

There are a bunch of parameters around DLM usage that we haven't
quantified yet - we can talk capabilities sensibly, but we don't yet
know how much load we will put on the DLM, nor how it will scale
relative to cloud size. My naive expectation is that we'll need a
-very- large cloud to stress the cluster size of any decent DLM, but
that request rate / latency could be a potential issue as clouds scale
(e.g. need care and feeding).

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread Nader Lahouti
Thanks for the pointer, I'll look into it. But one question: by calling
stop() and then wait(), does that mean the application has to call start()
again after wait() in order to process more messages?

I am also using
http://docs.openstack.org/developer/oslo.messaging/server.html for the RPC
server.
Does that mean there have to be stop() and then wait() calls there as well?


Thanks,
Nader.



On Thu, Nov 5, 2015 at 10:19 AM, gord chung  wrote:

>
>
> On 05/11/2015 1:06 PM, Nader Lahouti wrote:
>
>> Hi Doug,
>>
>> I have an app that listens to notifications and used the info provided in
>>
>> http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
>>
>>
>> Basically I create
>> 1. NotificationEndpoints(object):
>>
>> https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
>> 2. NotifcationListener(object):
>>
>> https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
>> 3. and call start() and  then wait()
>>
>
> the correct usage is to call stop() before wait()[1]. for reference on how
> to use listeners, you can see Ceilometer[2]
>
> [1]
> http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
> [2]
> https://github.com/openstack/ceilometer/blob/master/ceilometer/utils.py#L250
>
> --
> gord
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Joshua Harlow

Sean Dague wrote:

On 11/05/2015 06:00 AM, Thierry Carrez wrote:

Hayes, Graham wrote:

On 04/11/15 20:04, Ed Leafe wrote:

On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:

Here's a Devstack review for zookeeper in support of this initiative:

https://review.openstack.org/241040

Thanks,
Dims

I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.


-- Ed Leafe


I got the impression that there was *some* operators that wouldn't run
java.


I feel like I'd like to see that with data. Because every Ops session
I've been in around logging and debugging has had nearly everyone raise
their hand that they are running the ELK stack for log analysis. So they
are all running Java already.

I would absolutely hate to have some design point get made based on
rumors from ops and "java is icky" sentiment from the dev space.

Defaults matter, because it means you get a critical mass of operators
running similar configs, and they can build and share knowledge. For all
of the issues with Rabbit, it has demonstrably been good to have
collaboration in the field between operators that have shared patterns
and fed back the issues. So we should really say Zookeeper is the
default choice, even if there are others people could choose that have
extra mustachy / monocle goodness.



+1 from me

I mean, I get that there will be some person out there who will say 'no, 
icky, that's java', but that type of person will *always* exist, no matter 
what the situation, and if we are basing sound technical decisions on that 
one person (and/or small set of people) it makes me wonder what the heck we 
are doing...


Because that's totally crazy (IMHO). At some point we need to listen to 
the 99%, make a solution targeted at them, and accept that we will not 
make 100% of people happy all the time. This is why I personally like 
being opinionated, and I think/thought that OpenStack as a group had 
matured enough to do this (but I see that it still isn't ready to).


My 2 cents,

-Josh


-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] Kilo is 'security-supported'. What does it imply?

2015-11-05 Thread Vasudevan, Swaminathan (PNB Roseville)
+1

-Original Message-
From: Carl Baldwin [mailto:c...@ecbaldwin.net] 
Sent: Thursday, November 05, 2015 10:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [stable][neutron] Kilo is 'security-supported'. 
What does it imply?

On Thu, Nov 5, 2015 at 8:17 AM, Ihar Hrachyshka  wrote:
> - Releases page on wiki [2] calls the branch ‘Security-supported’ (and 
> it’s not clear what it implies)

I saw this same thing yesterday when it was pointed out in the DVR IRC meeting 
[1].  I have a hard time believing that we want to abandon bug fix support for 
Kilo especially given recent attempts to be more proactive about it [2] (which 
I applaud).  I suspect that there has simply been a mis-communication and we 
need to get the story straight in the wiki pages which Ihar pointed out.

> - StableBranch page though requires that we don’t merge non-critical 
> bug fixes there: "Only critical bugfixes and security patches are acceptable”

Seems a little premature for Kilo.  It is little more than 6 months old.

> Some projects may want to continue backporting reasonable (even though
> non-critical) fixes to older stable branches. F.e. in neutron, I think 
> there is will to continue providing backports for the branch.

+1  I'd like to reiterate my support for backporting appropriate and
sensible bug fixes to Kilo.

> I wonder though whether we would not break some global openstack rules 
> by continuing with those backports. Are projects actually limited 
> about what types of bug fixes are supposed to go in stable branches, 
> or we embrace different models of stable maintenance and allow for 
> some freedom per project?

Carl

[1] 
http://eavesdrop.openstack.org/meetings/neutron_dvr/2015/neutron_dvr.2015-11-04-15.00.log.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077236.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] Kilo is 'security-supported'. What does it imply?

2015-11-05 Thread Carl Baldwin
On Thu, Nov 5, 2015 at 8:17 AM, Ihar Hrachyshka  wrote:
> - Releases page on wiki [2] calls the branch ‘Security-supported’ (and it’s
> not clear what it implies)

I saw this same thing yesterday when it was pointed out in the DVR IRC
meeting [1].  I have a hard time believing that we want to abandon bug
fix support for Kilo especially given recent attempts to be more
proactive about it [2] (which I applaud).  I suspect that there has
simply been a mis-communication and we need to get the story straight
in the wiki pages which Ihar pointed out.

> - StableBranch page though requires that we don’t merge non-critical bug
> fixes there: "Only critical bugfixes and security patches are acceptable”

Seems a little premature for Kilo.  It is little more than 6 months old.

> Some projects may want to continue backporting reasonable (even though
> non-critical) fixes to older stable branches. F.e. in neutron, I think there
> is will to continue providing backports for the branch.

+1  I'd like to reiterate my support for backporting appropriate and
sensible bug fixes to Kilo.

> I wonder though whether we would not break some global openstack rules by
> continuing with those backports. Are projects actually limited about what
> types of bug fixes are supposed to go in stable branches, or we embrace
> different models of stable maintenance and allow for some freedom per
> project?

Carl

[1] 
http://eavesdrop.openstack.org/meetings/neutron_dvr/2015/neutron_dvr.2015-11-04-15.00.log.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077236.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread gord chung



On 05/11/2015 1:06 PM, Nader Lahouti wrote:

Hi Doug,

I have an app that listens to notifications and used the info provided in
http://docs.openstack.org/developer/oslo.messaging/notification_listener.html


Basically I create
1. NotificationEndpoints(object):
https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
2. NotifcationListener(object):
https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
3. and call start() and  then wait()


The correct usage is to call stop() before wait() [1]. For a reference on 
how to use listeners, you can see Ceilometer [2].


[1]http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
[2] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/utils.py#L250
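
For concreteness, a minimal sketch of that lifecycle; the topic name and
endpoint are illustrative, but the calls are the oslo.messaging API
referenced in [1]:

    from oslo_config import cfg
    import oslo_messaging


    class NotificationEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # handle the notification here
            return oslo_messaging.NotificationResult.HANDLED


    transport = oslo_messaging.get_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [NotificationEndpoint()])

    listener.start()
    # ... the process consumes messages until a shutdown is requested ...
    listener.stop()   # stop accepting new messages
    listener.wait()   # wait for in-flight messages to finish (avoids the warning)

stop()/wait() are the shutdown half of the lifecycle; a long-running consumer 
keeps the listener started and only calls them when it is actually shutting down.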


--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] can we deprecate the xvp console?

2015-11-05 Thread Andrew Laski

On 11/05/15 at 10:39am, Matt Riedemann wrote:
I noticed today that nova.console.xvp hits the database directly for 
console pools. We should convert this to objects so that the console 
service does not have direct access to the database (this is the only 
console I see that hits the database directly). However, rather than 
go through the work of creating an object for ConsolePools, if no one 
is using xvp consoles in nova then we could deprecate it.


It looks like it was added back in diablo [1] (at least).

Someone from Rackspace in IRC said that they weren't using it, so 
given it's for xenserver I assume that means probably no one is using 
it, but we need to ask first.


So apparently I was wrong.  We are using both novnc and xvpvncproxy in 
an attempt to eventually get off of xvpvncproxy.  It's possible that 
nobody else is using it, but Rackspace at least is for now.




Please respond else I'll probably move forward with deprecation at 
some point in mitaka-1.


[1] 
https://github.com/openstack/nova/commit/b437a98738c7a564205d1b27e36b844cd54445d1

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Summit recap

2015-11-05 Thread Tim Hinrichs
Hi all,

It was great seeing so many Congress people in Tokyo last week!  Hopefully
you've all had a chance to recover by now.  Here's an overview of what
happened.  I was planning to go over this at this week's IRC meeting, but
forgot about the U.S. time change and missed the meeting--sorry about that.

1. Hands On Lab.   There were 40-50 people who attended, and all but 3-4 of
them got the VM we provided installed and worked through the lab.  1 of the
failures didn't have enough memory; 1 was something to do with VDX (?
Eric--is that right?); 1 was a version of Linux for which there wasn't a
VirtualBox installer.  The only weird problem was a glitch with the Horizon
interface that wouldn't show a table that we could show on the command
line.  Overall, people seemed to like Congress and what it had to offer.

2. Working session: distributed architecture
Base class is working with oslo-messaging, but unit tests are not working.
Peter is planning to debug and push to review in the next few weeks.

One thing we discussed was that the distributed architecture is only a
building block for an HA design.  But it does not deliver HA.  In
particular, for HA we will want to have multiple copies of the policy
engine, and these copies should be hidden from the user; the system should
take care of mapping an API call intended for the policy engine to one of
the copies.  The distributed architecture does not hide the existence of
multiple policy engines; rather, the user is responsible for spinning up
multiple policy engines, giving them different names, and directing API
requests to whichever one of the policy engines she wants to interact with.

3. Working session: infrastructure/testing
- We agreed to add Murano tests to our gate (as non-voting) to ensure that
we know when we add something to Congress that breaks Murano.  Should be
sufficient to simply copy their jenkins job into the Congress job-list and
make that job non-voting.

- We discussed the problem of datasource drivers, where to store them, and
how to test them.  Neutron has a similar issue with vendor-specific
plugins.  We thought it would be nice to have a separate requirements.txt
file for each driver; but then it is unclear how to test datasource drivers
in the gate because setup.py only installs the 1 requirements.txt in the
root directory.  So in the end, we decided the right thing was to have 1
requirements.txt file that includes all the dependencies for the OpenStack
drivers so that we can test those in the gate, and to have a separate
requirements.txt for each of the non-OpenStack drivers, since we can't test
those in the gate anyway.

4. Working session: Monasca and NFV.
- Fabio introduced us to Monasca, which is a monitoring project about to be
accepted into the BigTent.  It is an alternative to Ceilometer and focused
on high-performance.  They have alarms that can be set to inform the caller
any time a certain kind of event occurs.  Monasca is supposed to get a
superset of the data that Congress currently has drivers for.  They
suggested that Congress could automatically generate alarms based on the
data required by policy.  As a first step, we decided to write a simple
datasource driver to integrate with Monasca, as an easy way for the
Congress team to get familiar with Monasca.

- OPNFV Doctor project.  The Doctor project aims to detect and manage
faults in OPNFV platforms.  They hoped to use Congress to help identify
faults.  They wanted to connect Zabbix to Congress, which creates events
and have Congress push out config changes.  Concretely they asked for a
push-style datasource driver so that Zabbix could push data to Congress
through the API.  The blueprint for that work is here:
https://blueprints.launchpad.net/congress/+spec/push-type-datasource-driver

5. Discussion about Application-level Intent.

Outside the working sessions we talked with Ken Owens and his team about
application-level intent.  They are planning on building an
application-specific policy engine within the Congress framework.  For each
VM in an application, the user can rank the sensitivity of that VM as
low/medium/high for a handful of properties, e.g. latency, throughput.  The
provisioning system (which is external to Congress) then provisions the app
according to that policy, and the policy engine within Congress continually
monitors those properties and corrects violations.  The plan is to start
this as a completely standalone policy engine running a Congress node but
build it with an eye toward eventually delegating from the agnostic policy
engine to the application-intent engine.

6. Senlin project.  I heard about this project for the first time at the
summit.  It's policy-based cluster management.  Here's an email with more
details.

http://lists.openstack.org/pipermail/openstack-dev/2015-November/078498.html

It'd be great if those attended could respond with clarifications,
comments, and things I missed.

Let me know if anyone has questions/comments.
Tim
__

[openstack-dev] [Nova] attaching and detaching volumes in the API

2015-11-05 Thread Murray, Paul (HP Cloud)
Normally, operations on instances are synchronized at the compute node. In some 
cases it is necessary to synchronize somehow at the API. I have one of those 
cases and wondered what a good way to go about it would be.

As part of this spec: https://review.openstack.org/#/c/221732/

I want to attach/detach volumes (and so manipulate block device mappings) when 
an instance is not on any compute node (actually, when it is shelved). Normally 
this happens in a function on the compute manager synchronized on the instance 
uuid. When an instance is in the shelved_offloaded state it is not on a compute 
host, so the operations have to be done at the API (an existing example is when 
the instance is deleted in this state - the cleanup is done in the API but is 
not synchronized in that case).

One option I can see is using task states, i.e. using the expected_task_state 
parameter of instance.save() to control state transitions. In the API this makes 
sense, as the calls will be synchronous, so if an operation cannot be done it can 
be reported back to the user in an error return. I'm sure there must be some 
other options.
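
A minimal sketch of that compare-and-swap style guard; the task state name,
helper, and conflict handling below are purely illustrative, not part of the
spec:

    from nova import exception

    # Hypothetical task state, used only for illustration.
    ATTACHING_WHILE_SHELVED = 'attaching_volume_offloaded'

    def begin_attach(instance):
        instance.task_state = ATTACHING_WHILE_SHELVED
        try:
            # expected_task_state turns save() into a compare-and-swap:
            # it only succeeds if no other request has set a task state.
            instance.save(expected_task_state=[None])
        except exception.UnexpectedTaskStateError:
            # another operation won the race; report a conflict to the caller
            raise

If the save succeeds, the API request owns the instance for the duration of the
operation and clears task_state again when it finishes.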

Any suggestions would be welcome.

Paul

Paul Murray
Nova Technical Lead, HP Cloud
Hewlett Packard Enterprise
+44 117 316 2527



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][bugs] Developers Guide: Who's merging that?

2015-11-05 Thread Jeremy Stanley
On 2015-11-05 16:23:56 +0100 (+0100), Markus Zoeller wrote:
> some months ago I wrote down all the things a developer should know
> about the bug handling process in general [1]. It is written as a
> project agnostic thing and got some +1s but it isn't merged yet.
> It would be helpful when I could use it to give this as a pointer
> to new contributors as I'm under the impression that the mental image
> differs a lot among the contributors. So, my questions are:
> 
> 1) Who's in charge of merging such non-project-specific things?
[...]

This is a big part of the problem your addition is facing, in my
opinion. The OpenStack Infrastructure Manual is an attempt at a
technical manual for interfacing with the systems written and
maintained by the OpenStack Project Infrastructure team. It has,
unfortunately, also grown some sections which contain cultural
background and related recommendations because until recently there
was no better venue for those topics, but we're going to be ripping
those out and proposing them to documents maintained by more
appropriate teams at the earliest opportunity.

Bug management falls into a grey area currently, where a lot of the
information contributors need is cultural background mixed with
workflow information on using Launchpad (which is not really managed
by the Infra team). Some of the material there is still a fit for
the Infra Manual insofar as we do intend to start maintaining a
defect and task tracker for the OpenStack community in the near
future, so information on how to use Launchpad is probably an
acceptable placeholder until that's ready (however much of it should
likely just link to Launchpad's own documentation for now).

Cultural content about the lifecycle of bugs, standard practices for
triage, et cetera are likely better suited to the newly created
Project Team Guide; and then there's another class of content in
your proposed addition, content which is primarily of interest to
people reporting bugs for the first time. The Developer Guide
audience doesn't, I think, have a lot of overlap with
users/deployers who need guidance on what sort of information to put
in a bug report. Unfortunately, I don't have any great suggestions
for another community-maintained document which aligns well with
that target audience either.

So anyway, to my main point, topics in collaboratively-maintained
documentation are going to end up being closely tied to the
expertise of the review team for the document being targeted. In the
case of the Infra Manual that's the systems administrators who
configure and maintain our community infrastructure. I won't speak
for others on the team, but I don't personally feel comfortable
deciding what details a user should include in a bug report for
python-novaclient, or how the Cinder team should triage their bug
reports.

I expect that the lack of core reviews are due to:

1. Few of the core reviewers feel they can accurately judge much of
the content you've proposed in that change.

2. Nobody feels empowered to tell you that this large and
well-written piece of documentation you've spent a lot of time
putting together is a poor fit and should be split up and much of it
put somewhere else more suitable (especially without a suggestion as
to where that might be).

3. The core review team for this is the core review team for all our
infrastructure systems, and we're all unfortunately very behind in
handling the current review volume.

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > Can people help me work through the right set of tools for this use case 
> > (has come up from several Operators) and map out a plan to implement it:
> > 
> > Large cloud with many users coming from multiple Federation sources has 
> > a policy of providing a minimal setup for each user upon first visit to 
> > the cloud:  Create a project for the user with a minimal quota, and 
> > provide them a role assignment.
> > 
> > Here are the gaps, as I see it:
> > 
> > 1.  Keystone provides a notification that a user has logged in, but 
> > there is nothing capable of executing on this notification at the 
> > moment.  Only Ceilometer listens to Keystone notifications.
> > 
> > 2.  Keystone does not have a workflow engine, and should not be 
> > auto-creating projects.  This is something that should be performed via 
> > a Heat template, and Keystone does not know about Heat, nor should it.
> > 
> > 3.  The Mapping code is pretty static; it assumes a user entry or a 
> > group entry in identity when creating a role assignment, and neither 
> > will exist.
> > 
> > We can assume a special domain for Federated users to have per-user 
> > projects.
> > 
> > So; lets assume a Heat Template that does the following:
> > 
> > 1. Creates a user in the per-user-projects domain
> > 2. Assigns a role to the Federated user in that project
> > 3. Sets the minimal quota for the user
> > 4. Somehow notifies the user that the project has been set up.
> > 
> > This last probably assumes an email address from the Federated 
> > assertion.  Otherwise, the user hits Horizon, gets a "not authenticated 
> > for any projects" error, and is stumped.
> > 
> > How is quota assignment done in the other projects now?  What happens 
> > when a project is created in Keystone?  Does that information gets 
> > transferred to the other services, and, if so, how?  Do most people use 
> > a custom provisioning tool for this workflow?
> > 
> 
> I know at Dreamhost we built some custom integration that was triggered
> when someone turned on the Dreamcompute service in their account in our
> existing user management system. That integration created the account in
> keystone, set up a default network in neutron, etc. I've long thought we
> needed a "new tenant creation" service of some sort, that sits outside
> of our existing services and pokes them to do something when a new
> tenant is established. Using heat as the implementation makes sense, for
> things that heat can control, but we don't want keystone to depend on
> heat and we don't want to bake such a specialized feature into heat
> itself.
> 

I agree, an automation piece that is built-in and easy to add to
OpenStack would be great.

I do not agree that it should be Heat. Heat is for managing stacks that
live on and change over time and thus need the complexity of the graph
model Heat presents.

I'd actually say that Mistral or Ansible are better choices for this. A
service which listens to the notification bus and triggers a workflow
defined somewhere in either Ansible playbooks or Mistral's workflow
language would simply run through the "skel" workflow for each user.

The actual workflow would probably almost always be somewhat site
specific, but it would make sense for Keystone to include a few basic ones
as "contrib" elements. For instance, the "notify the user" piece would
likely be simplest if you just let the workflow tool send an email. But
if your cloud has Zaqar, you may want to use that as well or instead.
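
To make the "skel" steps concrete, here is a rough Python sketch of what such
a workflow body might do, whether driven by Mistral, Ansible, or a small
listener service; the domain name, role name, and quota values are assumptions
for illustration, not an existing implementation:

    from keystoneclient.v3 import client as ks_client
    from novaclient import client as nova_client

    PER_USER_DOMAIN = 'federated-users'   # assumed dedicated domain

    def provision_user(session, user_id, user_name):
        keystone = ks_client.Client(session=session)
        nova = nova_client.Client('2', session=session)

        # 1. create a per-user project in the dedicated domain
        domain = keystone.domains.find(name=PER_USER_DOMAIN)
        project = keystone.projects.create(name=user_name, domain=domain)

        # 2. give the federated user a minimal role on that project
        role = keystone.roles.find(name='_member_')
        keystone.roles.grant(role, user=user_id, project=project.id)

        # 3. set a minimal quota (values are illustrative)
        nova.quotas.update(project.id, instances=2, cores=2, ram=2048)

        # 4. notify the user (email, Zaqar, ...) -- left to the deployment
        return project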

Adding Mistral here to see if they have some thoughts on how this
might work.

BTW, if this does form into a new project, I suggest naming it
Skeleton[1]

[1] https://goo.gl/photos/EML6EPKeqRXioWfd8 (that was my front yard..)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread Nader Lahouti
Hi Doug,

I have an app that listens to notifications and used the info provided in
http://docs.openstack.org/developer/oslo.messaging/notification_listener.html


Basically I create
1. NotificationEndpoints(object):
https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
2. NotifcationListener(object):
https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
3. and call start() and  then wait()


Thanks,
Nader.



On Thu, Nov 5, 2015 at 5:27 AM, Doug Hellmann  wrote:

> Excerpts from Nader Lahouti's message of 2015-11-04 21:25:15 -0800:
> > Hi,
> >
> > I'm seeing the below warning message continuously:
> >
> > 2015-11-04 21:09:38  WARNING [oslo_messaging.server] wait() should have
> > been called after stop() as wait() waits for existing messages to finish
> > processing, it has been 692.98 seconds and stop() still has not been
> called
> >
> > How to avoid this waring message? Anything needs to be changed when using
> > the notification API with the latest oslo_messaging?
> >
> > Thanks,
> > Nader.
>
> What version of what application is producing the message?
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Doug Hellmann
Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> Can people help me work through the right set of tools for this use case 
> (has come up from several Operators) and map out a plan to implement it:
> 
> Large cloud with many users coming from multiple Federation sources has 
> a policy of providing a minimal setup for each user upon first visit to 
> the cloud:  Create a project for the user with a minimal quota, and 
> provide them a role assignment.
> 
> Here are the gaps, as I see it:
> 
> 1.  Keystone provides a notification that a user has logged in, but 
> there is nothing capable of executing on this notification at the 
> moment.  Only Ceilometer listens to Keystone notifications.
> 
> 2.  Keystone does not have a workflow engine, and should not be 
> auto-creating projects.  This is something that should be performed via 
> a Heat template, and Keystone does not know about Heat, nor should it.
> 
> 3.  The Mapping code is pretty static; it assumes a user entry or a 
> group entry in identity when creating a role assignment, and neither 
> will exist.
> 
> We can assume a special domain for Federated users to have per-user 
> projects.
> 
> So; lets assume a Heat Template that does the following:
> 
> 1. Creates a user in the per-user-projects domain
> 2. Assigns a role to the Federated user in that project
> 3. Sets the minimal quota for the user
> 4. Somehow notifies the user that the project has been set up.
> 
> This last probably assumes an email address from the Federated 
> assertion.  Otherwise, the user hits Horizon, gets a "not authenticated 
> for any projects" error, and is stumped.
> 
> How is quota assignment done in the other projects now?  What happens 
> when a project is created in Keystone?  Does that information gets 
> transferred to the other services, and, if so, how?  Do most people use 
> a custom provisioning tool for this workflow?
> 

I know at Dreamhost we built some custom integration that was triggered
when someone turned on the Dreamcompute service in their account in our
existing user management system. That integration created the account in
keystone, set up a default network in neutron, etc. I've long thought we
needed a "new tenant creation" service of some sort, that sits outside
of our existing services and pokes them to do something when a new
tenant is established. Using heat as the implementation makes sense, for
things that heat can control, but we don't want keystone to depend on
heat and we don't want to bake such a specialized feature into heat
itself.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Kyle Mestery
On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:

> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>> make IPAM much more powerful. Some other projects already do things like
>> this.
>>
>
> :( Actually, though "powerful" it also leads to implementation details
> leaking directly out of the public REST API. I'm very negative on this and
> would prefer an actual codified REST API that can be relied on regardless
> of backend driver or implementation.
>

I agree with Jay here. We've had people propose similar things in Neutron
before, and I've been against them. The entire point of the Neutron REST
API is to not leak these details out. It dampens the strength of the
logical model, and it tends to have users become reliant on backend
implementations.


>
> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>> 'extras' arbitrary JSON field. This allows us to put any information in
>> there that we think is important for us.
>>
>
> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
> structured, not a Wild West free-for-all. The biggest problem with using
> free-form JSON blobs in RESTful APIs like this is that you throw away the
> ability to evolve the API in a structured, versioned way. Instead of
> evolving the API using microversions, instead every vendor just jams
> whatever they feel like into the JSON blob over time. There's no way for
> clients to know what the server will return at any given time.
>
> Achieving consensus on a REST API that meets the needs of a variety of
> backend implementations is *hard work*, yes, but it's what we need to do if
> we are to have APIs that are viewed in the industry as stable,
> discoverable, and reliably useful.
>

++, this is the correct way forward.

Thanks,
Kyle


>
> Best,
> -jay
>
> Hoping to get some positive feedback from API and DB lieutenants too.
>>
>>
>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
>> mailto:salv.orla...@gmail.com>> wrote:
>>
>> Arbitrary blobs are a powerful tools to circumvent limitations of an
>> API, as well as other constraints which might be imposed for
>> versioning or portability purposes.
>> The parameters that should end up in such blob are typically
>> specific for the target IPAM driver (to an extent they might even
>> identify a specific driver to use), and therefore an API consumer
>> who knows what backend is performing IPAM can surely leverage it.
>>
>> Therefore this would make a lot of sense, assuming API portability
>> and not leaking backend details are not a concern.
>> The Neutron team API & DB lieutenants will be able to provide more
>> input on this regard.
>>
>> In this case other approaches such as a vendor specific extension
>> are not a solution - assuming your granularity level is the
>> allocation pool; indeed allocation pools are not first-class neutron
>> resources, and it is not therefore possible to have APIs which
>> associate vendor specific properties to allocation pools.
>>
>> Salvatore
>>
>> On 4 November 2015 at 21:46, Shraddha Pandhe
>> mailto:spandhe.openst...@gmail.com>>
>> wrote:
>>
>> Hi folks,
>>
>> I have a small question/suggestion about IPAM.
>>
>> With IPAM, we are allowing users to have their own IPAM drivers
>> so that they can manage IP allocation. The problem is, the new
>> ipam tables in the database have the same columns as the old
>> tables. So, as a user, if I want to have my own logic for ip
>> allocation, I can't actually get any help from the database.
>> Whereas, if we had an arbitrary json blob in the ipam tables, I
>> could put any useful information/tags there, that can help me
>> for allocation.
>>
>> Does this make sense?
>>
>> e.g. If I want to create multiple allocation pools in a subnet
>> and use them for different purposes, I would need some sort of
>> tag for each allocation pool for identification. Right now,
>> there is no scope for doing something like that.
>>
>> Any thoughts? If there are any other way to solve the problem,
>> please let me know
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> <
>> http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscri

[openstack-dev] [keystone] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Adam Young
Can people help me work through the right set of tools for this use case 
(has come up from several Operators) and map out a plan to implement it:


Large cloud with many users coming from multiple Federation sources has 
a policy of providing a minimal setup for each user upon first visit to 
the cloud:  Create a project for the user with a minimal quota, and 
provide them a role assignment.


Here are the gaps, as I see it:

1.  Keystone provides a notification that a user has logged in, but 
there is nothing capable of executing on this notification at the 
moment.  Only Ceilometer listens to Keystone notifications.


2.  Keystone does not have a workflow engine, and should not be 
auto-creating projects.  This is something that should be performed via 
a Heat template, and Keystone does not know about Heat, nor should it.


3.  The Mapping code is pretty static; it assumes a user entry or a 
group entry in identity when creating a role assignment, and neither 
will exist.


We can assume a special domain for Federated users to have per-user 
projects.


So; lets assume a Heat Template that does the following:

1. Creates a user in the per-user-projects domain
2. Assigns a role to the Federated user in that project
3. Sets the minimal quota for the user
4. Somehow notifies the user that the project has been set up.

This last probably assumes an email address from the Federated 
assertion.  Otherwise, the user hits Horizon, gets a "not authenticated 
for any projects" error, and is stumped.


How is quota assignment done in the other projects now?  What happens 
when a project is created in Keystone?  Does that information get 
transferred to the other services, and, if so, how?  Do most people use 
a custom provisioning tool for this workflow?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Jim Rollenhagen
On Thu, Nov 05, 2015 at 11:55:50AM -0500, Jay Pipes wrote:
> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
> >Hi Salvatore,
> >
> >Thanks for the feedback. I agree with you that arbitrary JSON blobs will
> >make IPAM much more powerful. Some other projects already do things like
> >this.
> 
> :( Actually, though "powerful" it also leads to implementation details
> leaking directly out of the public REST API. I'm very negative on this and
> would prefer an actual codified REST API that can be relied on regardless of
> backend driver or implementation.
> 
> >e.g. In Ironic, node has driver_info, which is JSON. it also has an
> >'extras' arbitrary JSON field. This allows us to put any information in
> >there that we think is important for us.
> 
> Yeah, and this is a bad thing, IMHO. Public REST APIs should be structured,
> not a Wild West free-for-all. The biggest problem with using free-form JSON
> blobs in RESTful APIs like this is that you throw away the ability to evolve
> the API in a structured, versioned way. Instead of evolving the API using
> microversions, instead every vendor just jams whatever they feel like into
> the JSON blob over time. There's no way for clients to know what the server
> will return at any given time.

Right, this has caused Ironic some pain in the past (though it does make
it easier for drivers to add some random info they need). I'd like to
try to move away from this sometime soon(tm).

// jim

> 
> Achieving consensus on a REST API that meets the needs of a variety of
> backend implementations is *hard work*, yes, but it's what we need to do if
> we are to have APIs that are viewed in the industry as stable, discoverable,
> and reliably useful.
> 
> Best,
> -jay
> 
> >Hoping to get some positive feedback from API and DB lieutenants too.
> >
> >
> >On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
> >mailto:salv.orla...@gmail.com>> wrote:
> >
> >Arbitrary blobs are a powerful tool to circumvent limitations of an
> >API, as well as other constraints which might be imposed for
> >versioning or portability purposes.
> >The parameters that should end up in such blob are typically
> >specific for the target IPAM driver (to an extent they might even
> >identify a specific driver to use), and therefore an API consumer
> >who knows what backend is performing IPAM can surely leverage it.
> >
> >Therefore this would make a lot of sense, assuming API portability
> >and not leaking backend details are not a concern.
> >The Neutron team API & DB lieutenants will be able to provide more
> >input on this regard.
> >
> >In this case other approaches such as a vendor specific extension
> >are not a solution - assuming your granularity level is the
> >allocation pool; indeed allocation pools are not first-class neutron
> >resources, and it is not therefore possible to have APIs which
> >associate vendor specific properties to allocation pools.
> >
> >Salvatore
> >
> >On 4 November 2015 at 21:46, Shraddha Pandhe
> >mailto:spandhe.openst...@gmail.com>>
> >wrote:
> >
> >Hi folks,
> >
> >I have a small question/suggestion about IPAM.
> >
> >With IPAM, we are allowing users to have their own IPAM drivers
> >so that they can manage IP allocation. The problem is, the new
> >ipam tables in the database have the same columns as the old
> >tables. So, as a user, if I want to have my own logic for ip
> >allocation, I can't actually get any help from the database.
> >Whereas, if we had an arbitrary json blob in the ipam tables, I
> >could put any useful information/tags there, that can help me
> >for allocation.
> >
> >Does this make sense?
> >
> >e.g. If I want to create multiple allocation pools in a subnet
> >and use them for different purposes, I would need some sort of
> >tag for each allocation pool for identification. Right now,
> >there is no scope for doing something like that.
> >
> >Any thoughts? If there are any other way to solve the problem,
> >please let me know
> >
> >
> >
> >
> >
> > __
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> >openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >
> > 
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > __
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> >openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Openstack-operators] [nova] can we deprecate the xvp console?

2015-11-05 Thread Bob Ball
> I noticed today that nova.console.xvp hits the database directly for
> console pools. We should convert this to objects so that the console
> service does not have direct access to the database (this is the only
> console I see that hits the database directly). However, rather than go
> through the work of create an object for ConsolePools, if no one is
> using xvp consoles in nova then we could deprecate it.

I believe that deprecating the XVP consoles would be the better move; XVP has 
not been maintained for 2 years (https://github.com/xvpsource/xvp/) and 
standard XenServer OpenStack installations do not use the XVP console.

Bob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Jay Pipes

On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:

Hi Salvatore,

Thanks for the feedback. I agree with you that arbitrary JSON blobs will
make IPAM much more powerful. Some other projects already do things like
this.


:( Actually, though "powerful" it also leads to implementation details 
leaking directly out of the public REST API. I'm very negative on this 
and would prefer an actual codified REST API that can be relied on 
regardless of backend driver or implementation.



e.g. In Ironic, node has driver_info, which is JSON. it also has an
'extras' arbitrary JSON field. This allows us to put any information in
there that we think is important for us.


Yeah, and this is a bad thing, IMHO. Public REST APIs should be 
structured, not a Wild West free-for-all. The biggest problem with using 
free-form JSON blobs in RESTful APIs like this is that you throw away 
the ability to evolve the API in a structured, versioned way. Instead of 
evolving the API using microversions, instead every vendor just jams 
whatever they feel like into the JSON blob over time. There's no way for 
clients to know what the server will return at any given time.
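
To make the contrast concrete (both snippets are purely illustrative, not 
actual Neutron payloads):

    # Structured attribute: the key and its meaning are documented and can
    # be evolved with microversions.
    allocation_pool = {
        'start': '10.0.0.2',
        'end': '10.0.0.254',
        'purpose': 'management',   # hypothetical first-class field
    }

    # Free-form blob: every backend invents its own keys, so a client has
    # no way to know what it will get back.
    allocation_pool = {
        'start': '10.0.0.2',
        'end': '10.0.0.254',
        'blob': {'vendor_x_rack': 'r12', 'weight': 3},
    }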


Achieving consensus on a REST API that meets the needs of a variety of 
backend implementations is *hard work*, yes, but it's what we need to do 
if we are to have APIs that are viewed in the industry as stable, 
discoverable, and reliably useful.


Best,
-jay

Best,
-jay


Hoping to get some positive feedback from API and DB lieutenants too.


On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
mailto:salv.orla...@gmail.com>> wrote:

Arbitrary blobs are a powerful tool to circumvent limitations of an
API, as well as other constraints which might be imposed for
versioning or portability purposes.
The parameters that should end up in such blob are typically
specific for the target IPAM driver (to an extent they might even
identify a specific driver to use), and therefore an API consumer
who knows what backend is performing IPAM can surely leverage it.

Therefore this would make a lot of sense, assuming API portability
and not leaking backend details are not a concern.
The Neutron team API & DB lieutenants will be able to provide more
input on this regard.

In this case other approaches such as a vendor specific extension
are not a solution - assuming your granularity level is the
allocation pool; indeed allocation pools are not first-class neutron
resources, and it is not therefore possible to have APIs which
associate vendor specific properties to allocation pools.

Salvatore

On 4 November 2015 at 21:46, Shraddha Pandhe
mailto:spandhe.openst...@gmail.com>>
wrote:

Hi folks,

I have a small question/suggestion about IPAM.

With IPAM, we are allowing users to have their own IPAM drivers
so that they can manage IP allocation. The problem is, the new
ipam tables in the database have the same columns as the old
tables. So, as a user, if I want to have my own logic for ip
allocation, I can't actually get any help from the database.
Whereas, if we had an arbitrary json blob in the ipam tables, I
could put any useful information/tags there, that can help me
for allocation.

Does this make sense?

e.g. If I want to create multiple allocation pools in a subnet
and use them for different purposes, I would need some sort of
tag for each allocation pool for identification. Right now,
there is no scope for doing something like that.

Any thoughts? If there are any other way to solve the problem,
please let me know





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [nova] can we deprecate the xvp console?

2015-11-05 Thread Matt Riedemann
I noticed today that nova.console.xvp hits the database directly for 
console pools. We should convert this to objects so that the console 
service does not have direct access to the database (this is the only 
console I see that hits the database directly). However, rather than go 
through the work of creating an object for ConsolePools, if no one is 
using xvp consoles in nova then we could deprecate it.
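
For context, the conversion in question is the usual nova.objects pattern, 
roughly as below (the field list and the db call are illustrative, not a 
finished object):

    from nova import db
    from nova.objects import base
    from nova.objects import fields

    @base.NovaObjectRegistry.register
    class ConsolePool(base.NovaObject):
        VERSION = '1.0'

        fields = {
            'id': fields.IntegerField(read_only=True),
            'address': fields.StringField(nullable=True),
            'console_type': fields.StringField(nullable=True),
            'compute_host': fields.StringField(nullable=True),
        }

        @base.remotable_classmethod
        def get_by_host_type(cls, context, compute_host, host, console_type):
            # Wrap the existing direct db lookup so the console service can
            # go through conductor instead of hitting the database itself.
            db_pool = db.console_pool_get_by_host_type(
                context, compute_host, host, console_type)
            pool = cls(context)
            for field in cls.fields:
                setattr(pool, field, db_pool[field])
            pool.obj_reset_changes()
            return pool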


It looks like it was added back in diablo [1] (at least).

Someone from Rackspace in IRC said that they weren't using it, so given 
it's for xenserver I assume that means probably no one is using it, but 
we need to ask first.


Please respond else I'll probably move forward with deprecation at some 
point in mitaka-1.


[1] 
https://github.com/openstack/nova/commit/b437a98738c7a564205d1b27e36b844cd54445d1


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-05 Thread Paul Michali
Appreciate all the work Edgar!

Regards,

PCM

On Thu, Nov 5, 2015 at 11:15 AM Miguel Lavalle  wrote:

> Hey Paisano,
>
> Thanks for your great contributions.
>
> Un abrazo
>
> On Wed, Nov 4, 2015 at 6:28 PM, Edgar Magana 
> wrote:
>
>> Dear Colleagues,
>>
>> I have been part of this community from the very beginning when in Santa
>> Clara, CA back in 2011 a bunch of us crazy people decided to work on this
>> networking project.
>> Neutron has become a very unique piece of code and it requires an
>> approval team that will always be on top of everything. This is why I
>> would like to communicate to you that I decided to step down as Neutron Core.
>>
>> These are not breaking news for many of you because I shared this thought
>> during the summit in Tokyo and now it is a commitment. I want to let you
>> know that I learnt a lot from you and I hope my comments and reviews never
>> offended you.
>>
>> I will be around of course. I will continue my work on code reviews and
>> coordination on the Networking Guide.
>>
>> Thank you all for your support and good feedback,
>>
>> Edgar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-05 Thread Miguel Lavalle
Hey Paisano,

Thanks for your great contributions.

Un abrazo

On Wed, Nov 4, 2015 at 6:28 PM, Edgar Magana 
wrote:

> Dear Colleagues,
>
> I have been part of this community from the very beginning when in Santa
> Clara, CA back in 2011 a bunch of us crazy people decided to work on this
> networking project.
> Neutron has become a very unique piece of code and it requires an
> approval team that will always be on top of everything. This is why I
> would like to communicate to you that I decided to step down as Neutron Core.
>
> These are not breaking news for many of you because I shared this thought
> during the summit in Tokyo and now it is a commitment. I want to let you
> know that I learnt a lot from you and I hope my comments and reviews never
> offended you.
>
> I will be around of course. I will continue my work on code reviews and
> coordination on the Networking Guide.
>
> Thank you all for your support and good feedback,
>
> Edgar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight] Today's IRC meeting

2015-11-05 Thread Tripp, Travis S
Hello all,

The US time change while many of us were still getting home from Japan threw 
myself and several others off with today’s meeting time. Sorry about that! 
We’ll pick back up next week.  Next week’s agenda can be found at the below 
link.  Please feel free to add to it / modify it and let’s talk in the IRC 
room more prior to it. Primarily, we need to continue reviewing and 
prioritizing Mitaka work.

https://etherpad.openstack.org/p/search-team-meeting-agenda

Thanks,
Travis

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleO] appropriate location for docker image uploading

2015-11-05 Thread Brad P. Crochet
On Tue, Nov 3, 2015 at 2:54 PM, Jeff Peeler  wrote:
> I'm looking at introducing the ability for tripleoclient to upload
> docker images into a docker registry (planning for it to be installed
> in the undercloud [1]). I wanted to make sure something like this
> would be accepted or get suggestions on an alternate approach.
> Ultimately may end up looking something like the patch below, which
> I'm still waiting for further feedback on:
> https://review.openstack.org/#/c/239090/
>
>
> [1] https://review.openstack.org/#/c/238238/

Rather than continue on that code path, I would rather see the image
loading be done in tripleo-common similarly to the load-images script
in tripleo-incubator. The image building is already implemented [1],
but not yet merged. We want the yaml to drive the build/load, rather
than hard-code how we handle the images.

[1] https://review.openstack.org/#/c/235569/
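
For what it's worth, the yaml-driven load could be as simple as something 
like this (the file layout and registry address are placeholders, not the 
format tripleo-common will actually settle on):

    import subprocess
    import yaml

    def push_images(yaml_path, registry='192.0.2.1:8787'):
        # Expects e.g.:
        #   images:
        #     - name: tripleoupstream/centos-binary-nova-compute
        #       tag: latest
        with open(yaml_path) as f:
            data = yaml.safe_load(f)
        for image in data['images']:
            src = '%s:%s' % (image['name'], image.get('tag', 'latest'))
            dest = '%s/%s' % (registry, src)
            subprocess.check_call(['docker', 'pull', src])
            subprocess.check_call(['docker', 'tag', src, dest])
            subprocess.check_call(['docker', 'push', dest])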

-- 
Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
Principal Software Engineer
(c) 704.236.9385 (w) 919.301.3231

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][kolla] Summary from Mitaka Summit for Kolla

2015-11-05 Thread Paul Bourke

Hi all,

I noticed Flavio did one of these for Glance, so I said I'd try to 
summarise the work sessions that took place for Kolla at the Mitaka summit.


These are by no means exhaustive or a complete record of everything that 
was discussed, but are more a high level summary of the etherpads from 
each session. Others please feel free to follow up with your own.


Hope it's useful.

-Paul

Wednesday:
=
Documentation
-
https://etherpad.openstack.org/p/kolla-mitaka-documentation

* There has been feedback that the current Kolla documentation is too 
repetitive and not logically structured. Based on this we brainstormed 
on a new structure and set of titles that the docs should be made up of. 
This can be found in the etherpad link above.


* Decided we should not require documentation to be included with every 
patch - patches should not be rejected due to this. However, should make 
liberal use of the DocImpact flag, which will auto generate tickets/bugs 
in launchpad for the docs to be filled in.


* We want to figure out how to make use of the smart yum/apt feature 
used in existing OpenStack docs that allows the same set of docs to be 
generated with commands for various distros.


Diagnostics

https://etherpad.openstack.org/p/kolla-mitaka-diagnostics

* Currently we have rsyslog containers on each node collecting logs from 
each container/service. However, the logging config in these services is 
suboptimal, and we're not getting error logs. Also, non OpenStack 
services such as mariadb, rabbitmq, etc are not logging to syslog at all.


* Central logging needs to happen in Mitaka. Need to evaluate 
elkstack/efkstack for this. Pad highlights various requirements and 
caveats to watch out for when implementing central logging.


* People like the idea of a toolbox container, which has a variety of 
useful tools for interacting with a cluster. We may also like to 
distribute a wrapper script for calling this in a more user-friendly way 
(e.g. 'docker-ostk nova list' instead of 'docker run -it  nova 
list').
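
A minimal sketch of what that wrapper could be (the 'docker-ostk' name and 
the toolbox image are placeholders from the discussion, nothing shipped):

    #!/usr/bin/env python
    # Hypothetical docker-ostk: run a client command inside the toolbox
    # container, passing the caller's OS_* credentials through.
    import os
    import subprocess
    import sys

    TOOLBOX_IMAGE = 'kolla/toolbox'   # placeholder image name

    def main():
        env_args = []
        for key, value in os.environ.items():
            if key.startswith('OS_'):
                env_args += ['-e', '%s=%s' % (key, value)]
        cmd = (['docker', 'run', '--rm', '-it'] + env_args +
               [TOOLBOX_IMAGE] + sys.argv[1:])
        sys.exit(subprocess.call(cmd))

    if __name__ == '__main__':
        main()

With that, 'docker-ostk nova list' expands to the full 'docker run ...' line.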


Bare Metal Deployment
-
https://etherpad.openstack.org/p/kolla-mitaka-bare-metal-deployment

* Currently we suffer from the problem that in order to deploy Kolla you 
first need to manually init the host nodes, which can take 
3+ hours. We discussed some existing solutions for this which mainly 
boiled down to either bifrost or instack. Both have their pros and cons 
outlined in the etherpad.


* Outcome is to evaluate both of these solutions and hope to document 
solutions for both. To support the bare metal config we may also aim to 
create a separate small playbook which can do basic config on the nodes 
once provisioned (installing docker-py, firewall config, etc.).


Mitaka Roadmap
--
https://etherpad.openstack.org/p/Kolla-Mitaka-Roadmap

* In this session we brainstormed on ideas and work that we would like 
to see scheduled for the Mitaka cycle. Topics included deploying the 
rest of the big tent, a better plugin architecture (horizon, neutron, 
etc), and alternative forms of deployment, i.e. Apache Mesos.


Operator Requirements
-
https://etherpad.openstack.org/p/kolla-mitaka-operator-requirements-gathering

* We didn't get as many ops as we'd hoped for this session but still 
gathered some very useful feedback. Amongst the top requests were a 
mechanism for service plugins (which was already in the mitaka roadmap 
session), a way to customise service start scripts, and a way to bind 
mount local repos into containers to make kolla a more viable dev 
environment.


* Pain points included the fact that restarting the docker daemon restarts 
all containers, and some other open bugs in Docker itself that hit 
operators, such as sometimes not being able to delete containers.


* Source installs were echoed as being a big driver for Kolla adoption - 
people don't like packages and even more so when they have to build them 
themselves.


Thursday

Gating Commits
---
https://etherpad.openstack.org/p/kolla-mitaka-gating

* Currently we have gates for building + deploying "centos 
binary+source", and "ubuntu source". We don't deploy every container in 
the gate, just a subset to save time. This excercises the Docker build 
and Ansible deploy code. We also want to add gates for the rest of our 
included OSes which include oraclelinux and RHEL.


* Our next goal is to exercise the OpenStack services/apis themselves, 
in the gate. There is work under way to add tempest runs to the gate; 
however, inc0 also suggested that we should have some more simple smoke 
tests present in order to fail more quickly. Frustration can arise from 
waiting for a full deploy only to find out that one of the core 
containers such as Keystone went into a failed state right at the 
beginning of the deploy. We will investigate how we can use the existing 
kolla-ansible container, possibly along with shade, to implement this.
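
As a rough idea of the kind of fail-fast check meant here (using shade, 
with credentials assumed to come from the environment; the individual 
checks are placeholders):

    import shade

    def smoke_test():
        # Fail quickly if the core APIs are not answering, before waiting
        # on a full tempest run.
        cloud = shade.openstack_cloud()   # reads OS_* / clouds.yaml
        assert cloud.list_flavors(), 'nova did not return any flavors'
        assert cloud.list_images() is not None, 'glance not reachable'
        assert cloud.list_networks() is not None, 'neutron not reachable'

    if __name__ == '__main__':
        smoke_test()
        print('smoke test passed')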


Re: [openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-05 Thread Carl Baldwin
Edgar,

It is great working with you.  You've done so much.

Carl

On Wed, Nov 4, 2015 at 5:28 PM, Edgar Magana  wrote:
> Dear Colleagues,
>
> I have been part of this community from the very beginning when in Santa
> Clara, CA back in 2011 a bunch of us crazy people decided to work on this
> networking project.
> Neutron has become a very unique piece of code and it requires an
> approval team that will always be on top of everything. This is why I
> would like to communicate to you that I decided to step down as Neutron Core.
>
> These are not breaking news for many of you because I shared this thought
> during the summit in Tokyo and now it is a commitment. I want to let you
> know that I learnt a lot from you and I hope my comments and reviews never
> offended you.
>
> I will be around of course. I will continue my work on code reviews and
> coordination on the Networking Guide.
>
> Thank you all for your support and good feedback,
>
> Edgar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-05 Thread Jim Rollenhagen
On Wed, Nov 04, 2015 at 12:19:37PM -0800, Jim Rollenhagen wrote:
[snip]

> Yeah, no worries there. So you're good with unreleased changes just
> being 3 months, no cycle boundaries? If so, I'll push up a change to the
> governance repo for that.

That change is here: https://review.openstack.org/#/c/242117/

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Andrew Laski

On 11/05/15 at 10:12am, Jonathan D. Proulx wrote:

On Thu, Nov 05, 2015 at 09:33:37AM -0500, Andrew Laski wrote:

:Can you be a little more specific on what API difference is important
:to you?  There are two differences currently between migrate and
:resize in the API:
:
:1. There is a different policy check, but this only really protects
:the next bit.
:
:2. Resize passes in a new flavor and migration does not.
:
:Both actions result in an instance being scheduled to a new host.  If
:they were consolidated into a single action with a policy check to
:enforce that users specified a new flavor and admins could leave that
:off would that be problematic for you?

My typical use is live-migration (perhaps that is yet another code
path?) which involves:


Yes, live-migration is completely separate from resize/(cold)migrate.  
I'm not convinced that we can or should consolidate live-migration with 
resize/cold-migrate.




3. specify the host to migrate to

This is what I really want to protect.

my use case if it helps:

The reason I want to specify the host (or, even better if I could, a
host aggregate) is that I use 'cpu_mode=host-passthrough' and have a
few generations of hardware (and my instance types are not constrained
to a particular generation, which I realize is an option as we do that
for other purposes), so left to the scheduler it might try to
live-migrate to an older cpu generation, which would fail. So we're
currently using human intelligence to try to migrate to the same
generation, and if that's full, move to a newer one.

This is an uncommon but important procedure mostly used for updates that
require hypervisor reboot in which we roll everything from node-0 to
node-N, update 0 then roll node-1 to node0 etc ...

If I could constrain migration by host aggregate in ways that didn't
map to instance type metadata constraints that would simplify this,
but the current situation is adequate for me.

This isn't an issue with non-live migration or resize, neither of which
requires CPU consistency.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][bugs] Developers Guide: Who's merging that?

2015-11-05 Thread Markus Zoeller
Hey folks,

some months ago I wrote down all the things a developer should know
about the bug handling process in general [1]. It is written as a
project agnostic thing and got some +1s but it isn't merged yet.
It would be helpful if I could give this as a pointer 
to new contributors, as I'm under the impression that the mental image
differs a lot among the contributors. So, my questions are:

1) Who's in charge of merging such non-project-specific things?
2) Did I miss some important things in the commit?

[1] https://review.openstack.org/#/c/192232/

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Mitaka Priorities and Deadlines

2015-11-05 Thread John Garbutt
Hi,

Here is a catch up on a few release details...

For Nova specific deadlines and dates, please see:
https://wiki.openstack.org/wiki/Nova/Mitaka_Release_Schedule

Please note:
Dec 3: spec and blueprint freeze
Jan 21: non-priority feature freeze

The list of Mitaka priorities can be found here:
http://specs.openstack.org/openstack/nova-specs/priorities/mitaka-priorities.html

This was discussed at the summit, with a follow up in gerrit for those
who were unable to make it to the summit.


Summit
---

You can find out what happened at the design summit in the etherpads:
https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Nova

Lots of good debate on specific specs or ideas, and the best way
forward. No major changes in direction, as you can see in the
priorities list above. I hope I will get time to do a blog post
summary next week, unless someone beats me to it (please, hint, hint).


Blueprints
--

To get your -2 removed, you need to get your blueprint approved for Mitaka.

To get a spec-less blueprint approved, please add it in this list:
https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking

We are also using the above to try and categorise the specs, in an
attempt to merge the higher priority specs first.

More details see:
https://wiki.openstack.org/wiki/Nova/Process


Reviews


Please focus on code reviews linked in here:
https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking

As in the last release, subgroups work to the priorities and review patches that
interest them. Once they are happy, they recommend them for merge to
nova-core.

For example, there is a new ops sub group forming to look at finding
and reviewing the most important fixes and features from an operator
perspective. Hopefully there will be many other groups active this
release.


As usual, any questions, do email me, or catch me on IRC.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Pagination in thre API

2015-11-05 Thread Everett Toews
On Nov 5, 2015, at 5:44 AM, John Garbutt 
mailto:j...@johngarbutt.com>> wrote:

On 5 November 2015 at 09:46, Richard Jones 
mailto:r1chardj0...@gmail.com>> wrote:
As a consumer of such APIs on the Horizon side, I'm all for consistency in
pagination, and more of it, so yes please!

On 5 November 2015 at 13:24, Tony Breeds 
mailto:t...@bakeyournoodle.com>> wrote:

On Thu, Nov 05, 2015 at 01:09:36PM +1100, Tony Breeds wrote:
Hi All,
   Around the middle of October a spec [1] was uploaded to add
pagination
support to the os-hypervisors API.  While I recognize the use case it
seemed
like adding another pagination implementation wasn't an awesome idea.

Today I see 3 more requests to add pagination to APIs [2]

Perhaps I'm over thinking it but should we do something more strategic
rather
than scattering "add pagination here".

+1

The plan, as I understand it, is to first finish off this API WG guideline:
http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html


An attempt at an API guideline for pagination is here [1] but hasn't received 
any updates in over a month, which can be understandable as sometimes other 
work takes precedence.

Perhaps we can get that guideline moving again?

If it's becoming difficult to reach agreement on that approach in the 
guideline, it could be worthwhile to take a step back and do some analysis on 
the way pagination is done in the more established APIs. I've found that such 
analysis can be very helpful as you're moving forward from a known state.
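
For reference, the pattern most of the established APIs converge on is 
limit/marker paging; a client-side loop looks roughly like this (the 
endpoint, collection key, and 'session' object are illustrative, not any 
specific service's API):

    def list_all(session, url, key, page_size=100):
        # Request page_size items at a time, passing the id of the last
        # item back as 'marker' until a short page signals the end.
        items = []
        marker = None
        while True:
            params = {'limit': page_size}
            if marker is not None:
                params['marker'] = marker
            page = session.get(url, params=params).json()[key]
            items.extend(page)
            if len(page) < page_size:
                return items
            marker = page[-1]['id']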

The place for that analysis is in Current Design [2] by filling in the 
Pagination page. You can find many examples of such analysis from the Current 
Design like Sorting [3].

Cheers,
Everett


[1] https://review.openstack.org/#/c/190743/
[2] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design
[3] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Sorting

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][neutron] Kilo is 'security-supported'. What does it imply?

2015-11-05 Thread Ihar Hrachyshka

Hi all,

there is contradictory info about what we do with kilo now that it’s not  
the latest stable release.


- An old email thread [1] suggested that the branch can still receive all  
kinds of bug fixes as long as corresponding project teams want to spend  
time on it: "expanding the support scope for N-1 stable branch is fine if  
we can deliver it”; "IIRC "current stable release" was originally defined  
by markmc as the branch where stable-maint team proactively proposes  
backports by monitoring the trunk, but we have lost that mode long ago,  
backports are now done retroactively after bugs are reported.”


- The Releases page on the wiki [2] calls the branch ‘Security-supported’ (and it’s  
not clear what that implies)


- The StableBranch page [3], though, requires that we don’t merge non-critical bug  
fixes there: "Only critical bugfixes and security patches are acceptable”


Some projects may want to continue backporting reasonable (even though  
non-critical) fixes to older stable branches. F.e. in neutron, I think  
there is will to continue providing backports for the branch.


I wonder though whether we would not break some global openstack rules by  
continuing with those backports. Are projects actually limited about what  
types of bug fixes are supposed to go in stable branches, or we embrace  
different models of stable maintenance and allow for some freedom per  
project?


[1]  
http://lists.openstack.org/pipermail/openstack-stable-maint/2014-July/002404.html

[2] https://wiki.openstack.org/wiki/Releases
[3] https://wiki.openstack.org/wiki/StableBranch#Support_phases

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Jonathan D. Proulx
On Thu, Nov 05, 2015 at 09:33:37AM -0500, Andrew Laski wrote:

:Can you be a little more specific on what API difference is important
:to you?  There are two differences currently between migrate and
:resize in the API:
:
:1. There is a different policy check, but this only really protects
:the next bit.
:
:2. Resize passes in a new flavor and migration does not.
:
:Both actions result in an instance being scheduled to a new host.  If
:they were consolidated into a single action with a policy check to
:enforce that users specified a new flavor and admins could leave that
:off would that be problematic for you?

My typical use is live-migration (perhaps that is yet another code
path?) which involves:

3. specify the host to migrate to

This is what I really want to protect.

my use case if it helps:

The reason I want to specify the host (or, even better if I could, a
host aggregate) is that I use 'cpu_mode=host-passthrough' and have a
few generations of hardware (and my instance types are not constrained
to a particular generation, which I realize is an option as we do that
for other purposes), so left to the scheduler it might try to
live-migrate to an older cpu generation, which would fail. So we're
currently using human intelligence to try to migrate to the same
generation, and if that's full, move to a newer one.

This is an uncommon but important procedure mostly used for updates that
require hypervisor reboot in which we roll everything from node-0 to
node-N, update 0 then roll node-1 to node0 etc ...

If I could constrain migration by host aggregate in ways that didn't
map to instance type metadata constraints that would simplify this,
but the current situation is adequate for me.

This isn't an issue with non-live migration or resize, neither of which
requires CPU consistency.
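
For reference, the targeted move described above is roughly the following 
with python-novaclient ('nova' is assumed to be an authenticated v2 client, 
and the host name is site-specific):

    server = nova.servers.get('9f1c...')   # instance UUID elided here
    nova.servers.live_migrate(server, 'compute-gen2-07',
                              block_migration=False,
                              disk_over_commit=False)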

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-05 Thread Everett Toews
On Nov 3, 2015, at 11:46 PM, John Griffith 
mailto:john.griffi...@gmail.com>> wrote:

On Tue, Nov 3, 2015 at 4:57 PM, michael mccune 
mailto:m...@redhat.com>> wrote:
On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
What if we add a new API method that will just return resource status by
UUID? Or even just extend the get request with a new argument that returns
only the status?

Thoughts?

not sure i understand the resource status by UUID, could you explain that a 
little more.

as for changing the get request to return only the status, can't you have a 
filter on the get url that instructs it to return only the status?

​Yes, we already have that capability and it's used in a number of places.​



Relevant API guideline

http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#filtering

Everett
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-05 Thread Javeria Khan
Hi Evgeniy,

>
> 1. what version of Fuel do you use?
>
Using 7.0


> 2. could you please clarify what did you mean by "moving to
> deployment_tasks.yaml"?
>
I tried changing my tasks.yaml to a deployment_tasks.yaml as the wiki
suggests for 7.0. However I kept hitting issues.


> 3. could you please describe your use-case a bit more? Why do you want to
> run
> tasks on the host itself?
>

I have a monitoring tool that accompanies my new plugin, which basically
uses a config file that contains details about the cluster (IPs, VIPs,
networks etc). This config file is typically created on the installer nodes
through the deployment, Fuel Master in this case.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable] tools for keeping up with stable/liberty releases

2015-11-05 Thread Doug Hellmann
Release liaisons,

As described in [1], we are changing our stable release policy for
Liberty to encourage projects to tag new releases when they have
patches ready to be released. There is a script in the
openstack-infra/release-tools repository to make it easier to keep
track of what has not yet been released.

The list_unreleased_changes.sh script takes 2 arguments, the branch name
and the repository name(s). It clones a temporary copy of the
repositories and looks for changes since the last tag on the given
branch.

For example:

  $ ./list_unreleased_changes.sh stable/liberty openstack/glance
  
  [ Cloning openstack/glance ]
  INFO:zuul.CloneMapper:Workspace path set to: 
/mnt/projects/release-tools/release-tools/list-unreleased-kFv
  INFO:zuul.CloneMapper:Mapping projects to workspace...
  INFO:zuul.CloneMapper:  openstack/glance -> 
/mnt/projects/release-tools/release-tools/list-unreleased-kFv/openstack/glance
  INFO:zuul.CloneMapper:Expansion completed.
  INFO:zuul.Cloner:Preparing 1 repositories
  INFO:zuul.Cloner:Creating repo openstack/glance from upstream 
git://git.openstack.org/openstack/glance
  INFO:zuul.Cloner:upstream repo has branch stable/liberty
  INFO:zuul.Cloner:Falling back to branch stable/liberty
  INFO:zuul.Cloner:Prepared openstack/glance repo with branch stable/liberty
  INFO:zuul.Cloner:Prepared all repositories
  Creating a git remote called "gerrit" that maps to:
ssh://doug-hellm...@review.openstack.org:29418/openstack/glance.git
  
  [ Unreleased changes in openstack/glance ]
  
  Changes in glance 11.0.0..a50026b
  -
  53d48d8 2015-11-03 18:00:26 + add first reno-based release note
  4a31949 2015-11-03 18:00:26 + set default branch for git review
  aae81e2 2015-10-23 15:52:53 + Updated from global requirements
  b977544 2015-10-22 07:03:47 + Pass CONF to logging setup
  25ead6a 2015-10-18 10:12:26 + Fixed registry invalid token exception 
handling
  5434297 2015-10-17 10:40:42 + Updated from global requirements
  8902d12 2015-10-16 10:34:46 + Decrease test failure if second changes 
during run
  7158d78 2015-10-15 15:43:15 +0200 Switch to post-versioning
  
  [ Cleaning up ]

When you decide that it is time to prepare a new release, submit a patch
to the openstack/releases repository with the SHA, version, and other
info. See the README there for more details.

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-November/078281.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Andrew Laski

On 11/05/15 at 01:28pm, Murray, Paul (HP Cloud) wrote:




From: Ed Leafe [mailto:e...@leafe.com]
On Nov 5, 2015, at 2:43 AM, Tang Chen  wrote:

> I'm sorry that I cannot understand why resize and migrate are the same
thing behind the scenes.

Resize is essentially a migration to the same host, rather than a different
host. The process is still taking an existing VM and using it to create another
VM that appears to the user as the same (ID, networking, attached volumes,
metadata, etc.)





Or more specifically, the migrate and resize API actions both call the resize
function in the compute api. As Ed said, they are basically the same behind
the scenes. (But the API difference is important.)


Can you be a little more specific on what API difference is important to 
you?  There are two differences currently between migrate and resize in 
the API:


1. There is a different policy check, but this only really protects the 
next bit.


2. Resize passes in a new flavor and migration does not.

Both actions result in an instance being scheduled to a new host.  If 
they were consolidated into a single action with a policy check to 
enforce that users specified a new flavor and admins could leave that 
off would that be problematic for you?
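
To make the two entry points concrete, from python-novaclient they are 
simply the following ('nova' is assumed to be an authenticated v2 client; 
the flavor name is arbitrary):

    server = nova.servers.get(instance_uuid)   # uuid known from context

    # Admin "migrate": no flavor given, the scheduler picks a new host and
    # the existing flavor is kept.
    nova.servers.migrate(server)

    # "Resize": same code path underneath, but a new flavor is supplied.
    new_flavor = nova.flavors.find(name='m1.large')
    nova.servers.resize(server, new_flavor)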





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-onos]: Proposing new cores for networking-onos

2015-11-05 Thread Kyle Mestery
+1 for both.

On Thu, Nov 5, 2015 at 7:00 AM, Gal Sagie  wrote:

> +1 for both from me
>
> On Thu, Nov 5, 2015 at 2:53 PM, Vikram Choudhary 
> wrote:
>
>> Hi All,
>>
>> I would like to propose Mr. Ramanjaneya Reddy Palleti and Mr. Dongfeng as
>> new cores for the networking-onos project. Their contribution was significant
>> in the last Liberty cycle w.r.t. this project.
>>
>> *Facts:*
>> http://stackalytics.com/?metric=loc&module=networking-onos&release=all
>>
>> Request existing cores to vote for this proposal.
>>
>> Thanks
>> Vikram
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-05 Thread Prathyusha Guduri
Thanks Mooney,

will correct my localrc and run again

On Thu, Nov 5, 2015 at 6:23 PM, Mooney, Sean K  wrote:
> Hello
> When you set OVS_DPDK_MODE=controller_ovs
>
> you are disabling the install of ovs-dpdk on the controller node and only 
> installing the mechanism driver.
>
> If you want to install ovs-dpdk on the controller node you should set this 
> value as follows
>
> OVS_DPDK_MODE=controller_ovs_dpdk
>
> See
> https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/_downloads/local.conf.single_node
>
> ovs with dpdk will be installed in /usr/bin, not /usr/local/bin, as it does a 
> system-wide install, not a local install.
>
> Installation documentation can be found here
> https://github.com/openstack/networking-ovs-dpdk/tree/master/doc/source
>
> the networking-ovs-dpdk repo has been recently moved from stackforge to the 
> openstack namespace following the
> retirement of stackforge.
>
> Some links in the git repo still need to be updated to reflect this change.
>
> Regards
> sean
> -Original Message-
> From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
> Sent: Thursday, November 5, 2015 11:02 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [networking-ovs-dpdk]
>
> Hello all,
>
> Trying to install openstack with ovs-dpdk driver from devstack.
>
> Following is my localrc file
>
> HOST_IP_IFACE=eth0
> HOST_IP=10.0.2.15
> HOST_NAME=$(hostname)
>
> DATABASE_PASSWORD=open
> RABBIT_PASSWORD=open
> SERVICE_TOKEN=open
> SERVICE_PASSWORD=open
> ADMIN_PASSWORD=open
> MYSQL_PASSWORD=open
> HORIZON_PASSWORD=open
>
>
> enable_plugin networking-ovs-dpdk
> https://github.com/stackforge/networking-ovs-dpdk master 
> OVS_DPDK_MODE=controller_ovs
>
> disable_service n-net
> disable_service n-cpu
> enable_service neutron
> enable_service q-svc
> enable_service q-agt
> enable_service q-dhcp
> enable_service q-l3
> enable_service q-meta
> enable_service n-novnc
>
> DEST=/opt/stack
> SCREEN_LOGDIR=$DEST/logs/screen
> LOGFILE=${SCREEN_LOGDIR}/xstack.sh.log
> LOGDAYS=1
>
> Q_ML2_TENANT_NETWORK_TYPE=vlan
> ENABLE_TENANT_VLANS=True
> ENABLE_TENANT_TUNNELS=False
>
> #Dual socket platform with 16GB RAM,3072*2048kB hugepages leaves ~4G for the 
> system.
> OVS_NUM_HUGEPAGES=2048
> #Dual socket platform with 64GB RAM,14336*2048kB hugepages leaves ~6G for the 
> system.
> #OVS_NUM_HUGEPAGES=14336
>
> OVS_DATAPATH_TYPE=netdev
> OVS_LOG_DIR=/opt/stack/logs
> OVS_BRIDGE_MAPPINGS=public:br-ex
>
> ML2_VLAN_RANGES=public:100:200
> MULTI_HOST=1
>
> #[[post-config|$NOVA_CONF]]
> #[DEFAULT]
> firewall_driver=nova.virt.firewall.NoopFirewallDriver
> novncproxy_host=0.0.0.0
> novncproxy_port=6080
> scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
>
>
> After running ./stack.sh, which was successful, I could see that in the 
> ml2.conf.ini file ovsdpdk was added as the mechanism driver. But the agent 
> running was still openvswitch. Tried running ovsdpdk on q-agt screen, but 
> failed because ovsdpdk was not installed in /usr/local/bin, which I thought 
> devstack is supposed to do.
> Tried running setup.py in networking-ovs-dpdk folder, but that also did not 
> install ovs-dpdk in /usr/local/bin.
>
> Am stuck here. Please guide me how to proceed further. Also the Readme in 
> networking-ovs-dpdk folder says the instructions regarding installation are 
> available in below links - 
> http://git.openstack.org/cgit/stackforge/networking-ovs-dpdk/tree/doc/source/installation.rst
>
> But no repos found there. Kindly guide me to a doc or something on how to 
> build ovs-dpdk from devstack
>
> Thank you,
> Prathyusha
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-21, Nov 9-13

2015-11-05 Thread Doug Hellmann
This is the first in a series of email reminders about important dates
on the schedule as we work towards the Mitaka release. We will be
counting down from R-23, the Mitaka summit, to the release in R-0 the
week of April 4-8. If all goes as planned, these emails will be sent
just before the week mentioned in the subject (on my Thursday, but some
of you live in the future). I don't plan to send email every week, and I
will try to keep each one short.

For release liaisons and PTLs, these emails take the place of the 1-on-1
synchronization meetings we held during the Liberty cycle, and will
frequently contain reminders for actions that need to be taken to move
the release forward.

Focus
-

We are currently working towards the Mitaka 1 milestone. Teams should be
focusing on wrapping up incomplete work left over from the end of the
Liberty cycle, finalizing and announcing plans from the summit, and
completing specs and blueprints.

Release Actions
---

All deliverables should have reno configured before Mitaka 1. See
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html
for details, and follow up on that thread with questions.

Review stable/liberty branches for patches that have landed since the last
release and determine if your deliverables need new tags.

Important Dates
---

Mitaka 1 - Dec 1-3 (3 weeks away)

Mitaka release schedule: https://wiki.openstack.org/wiki/Mitaka_Release_Schedule

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Every bug in 8.0 milestone should be marked with area tag

2015-11-05 Thread Dmitry Pyzhov
Guys,

our assignee-based method of guessing bug area has failed. From now on
every bug should have one and only one area tag. You can find the explicit list
of tags here: https://wiki.openstack.org/wiki/Fuel/Bug_tags#Area_tags

Here is the full list of bugs without area tags.
You can find this link on the
https://wiki.openstack.org/wiki/Fuel/Bug_tags#Area_tags wiki page. Please
add area tags during bug triage.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] proposal to add Rohit Jaiswal to Ceilometer core

2015-11-05 Thread ZhiQiang Fan
+1

On Thu, Nov 5, 2015 at 9:45 PM, gord chung  wrote:

> hi folks,
>
> i'd like to nominate Rohit Jaiswal as core for Ceilometer. he's done a lot
> of good work recently like discovering and fixing many issues with Events
> and implementing the configuration reloading functionality. he's also been
> very active providing input and fixes for many bugs.
>
> as we've been doing, please vote here:
> https://review.openstack.org/#/c/242058/
>
> reviews:
>
> https://review.openstack.org/#/q/reviewer:%22Rohit+Jaiswal+%253Crohit.jaiswal%2540hp.com%253E%22+project:openstack/ceilometer,n,z
>
> patches:
>
> https://review.openstack.org/#/q/owner:%22Rohit+Jaiswal+%253Crohit.jaiswal%2540hp.com%253E%22+project:openstack/ceilometer,n,z
>
> https://review.openstack.org/#/q/owner:%22Rohit+Jaiswal+%253Crohit.jaiswal%2540hp.com%253E%22+project:openstack/python-ceilometerclient,n,z
>
> cheers,
>
> --
> gord
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] proposal to add Rohit Jaiswal to Ceilometer core

2015-11-05 Thread gord chung

hi folks,

i'd like to nominate Rohit Jaiswal as core for Ceilometer. he's done a 
lot of good work recently like discovering and fixing many issues with 
Events and implementing the configuration reloading functionality. he's 
also been very active providing input and fixes for many bugs.


as we've been doing, please vote here: 
https://review.openstack.org/#/c/242058/


reviews:
https://review.openstack.org/#/q/reviewer:%22Rohit+Jaiswal+%253Crohit.jaiswal%2540hp.com%253E%22+project:openstack/ceilometer,n,z

patches:
https://review.openstack.org/#/q/owner:%22Rohit+Jaiswal+%253Crohit.jaiswal%2540hp.com%253E%22+project:openstack/ceilometer,n,z
https://review.openstack.org/#/q/owner:%22Rohit+Jaiswal+%253Crohit.jaiswal%2540hp.com%253E%22+project:openstack/python-ceilometerclient,n,z

cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Chris Dent

On Thu, 5 Nov 2015, Sean Dague wrote:

On 11/05/2015 03:08 AM, Chris Dent wrote:

Outside of CI it is possible to deploy ceilo, aodh and gnocchi to use
tooz for coordinating group partitioning in active-active HA setups
and shared locks. Again the standard deploy for that has been to use
redis because of availability. It's fairly understood that zookeeper
would be more correct but there are packaging concerns.


What are the packaging concerns for zookeeper?


I had thought there were generic issues with RPMs of Java-based packages
but I'm able to find RPMs of zookeeper for recent Fedoras[1] so I guess
the concerns are either moot or nearly so. What this means for RHEL or
CentOS I've never been too sure about.

[1] http://rpmfind.net/linux/rpm2html/search.php?query=zookeeper
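
For what it's worth, on the consuming side the backend is just the tooz 
connection URL, so switching from redis to zookeeper is a one-line config 
change (addresses below are placeholders):

    from tooz import coordination

    # The same code works against 'redis://...' - only the URL changes.
    coord = coordination.get_coordinator('zookeeper://192.0.2.10:2181',
                                         b'agent-1')
    coord.start()
    lock = coord.get_lock(b'resource-42')
    if lock.acquire(blocking=True):
        try:
            pass  # critical section
        finally:
            lock.release()
    coord.stop()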

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-05 Thread Mooney, Sean K
Integration with packstack is not currently supported.
We currently have the first step (a puppet module to install ovs-dpdk) under 
review.
At present we are not directly targeting packstack support, but if anyone wants 
to add support
it would be welcome. At present devstack is the only fully supported 
deployment tool.

Support for Ubuntu 14.04 and CentOS 7.1 was recently added.
Automated testing is done by the intel-networking-ci using Fedora 21, but
we have manually tested Ubuntu and CentOS.

I currently have ovs-dpdk deployed on Ubuntu 14.04 on one of my dev systems 
using devstack.

Our current getting started guide just describes Fedora 21 deployment, but we 
should be adding Ubuntu and CentOS versions soon.

As far as I recall the main changes for Ubuntu are

-  Instead of setting selinux to permissive uninstall apparmor.

-  Instead of enabling the virt preview repo enable the kilo Ubuntu  
cloud archive.

Note that as devstack installs from source, the openstack packages from the kilo 
cloud archive
are not used; it is enabled to provide updated libvirt and qemu packages only.
As such kilo, liberty and master openstack should be deployable by devstack.

It should also be noted that due to changes in upstream neutron the stable/x 
branch of the networking-ovs-dpdk repo
is only compatible with the stable/x release of openstack.

Regards
Sean.




From: Rapelly, Varun [mailto:vrape...@sonusnet.com]
Sent: Thursday, November 5, 2015 12:04 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-ovs-dpdk]

Hi All,

Can we use https://github.com/openstack/networking-ovs-dpdk with packstack??

I'm trying to configure devstack with ovs-dpdk on ubuntu. But till now no 
success.

Could anybody tell whether it is supported on ubuntu or not, or is it only 
tested on Fedora?


Regards,
Varun

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka

2015-11-05 Thread Gary Kotton
Hi,
In Nova the new black is the os-vif-lib 
(https://etherpad.openstack.org/p/mitaka-nova-os-vif-lib). It may be worthwhile 
seeing if we can maybe do something at the same time with the nova crew and 
then bash out the dirty details here. It would be far easier if everyone was in 
the same room.
Just and idea.
Thanks
Gary

From: "John Davidge (jodavidg)" mailto:jodav...@cisco.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, November 5, 2015 at 2:08 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka

++

Sounds very sensible to me!

John

From: "Armando M." mailto:arma...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, 4 November 2015 21:23
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka

Hi folks,

After some consideration, I am proposing a change for the Mitaka release cycle 
in relation to the mid-cycle meetup event.

My proposal is to defer the gathering to later in the release cycle [1], and 
assess whether we have it or not based on the course of events in the cycle. If 
we feel that a last push closer to the end will help us hit some critical 
targets, then I am all in for arranging it.

Based on our latest experiences, I have not seen a strong correlation between 
progress made during the cycle and progress made during the meetup, so we might 
as well save us the trouble of travelling close to Christmas.

I'd like to thank Kyle, Miguel Lavalle and Doug for looking into the logistics. 
We may still need their services later in the new year, but as of now all I can 
say is:

Happy (distributed) hacking!

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Murray, Paul (HP Cloud)


> From: Ed Leafe [mailto:e...@leafe.com]
> On Nov 5, 2015, at 2:43 AM, Tang Chen  wrote:
> 
> > I'm sorry that I cannot understand why resize and migrate are the same
> thing behind the scenes.
> 
> Resize is essentially a migration to the same host, rather than a different
> host. The process is still taking an existing VM and using it to create 
> another
> VM that appears to the user as the same (ID, networking, attached volumes,
> metadata, etc.)
> 



Or more specifically, the migrate and resize API actions both call the resize
function in the compute api. As Ed said, they are basically the same behind 
the scenes. (But the API difference is important.)
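
Roughly speaking (a simplified sketch only, not the actual nova code path, which
goes through many more checks, the scheduler and conductor), the difference comes
down to whether a new flavor is passed in:

    # Illustrative sketch: both the "migrate" and "resize" server actions
    # funnel into one compute-api resize call; migrate passes no new flavor.
    def resize(instance, flavors, flavor_id=None):
        """Pick the flavor the resized/migrated VM should end up with."""
        if flavor_id is None:
            # migrate: keep the current flavor, just move to another host
            return instance["flavor"]
        # resize: apply the requested flavor (possibly on the same host)
        return flavors[flavor_id]

    instance = {"id": "vm-1", "flavor": "m1.small"}
    flavors = {"42": "m1.large"}
    print(resize(instance, flavors))        # migrate -> m1.small
    print(resize(instance, flavors, "42"))  # resize  -> m1.large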


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread Doug Hellmann
Excerpts from Nader Lahouti's message of 2015-11-04 21:25:15 -0800:
> Hi,
> 
> I'm seeing the below warning message continuously:
> 
> 2015-11-04 21:09:38  WARNING [oslo_messaging.server] wait() should have
> been called after stop() as wait() waits for existing messages to finish
> processing, it has been 692.98 seconds and stop() still has not been called
> 
> How to avoid this waring message? Anything needs to be changed when using
> the notification API with the latest oslo_messaging?
> 
> Thanks,
> Nader.

What version of what application is producing the message?
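
For reference, the warning itself describes the expected order: stop() first, then
wait(). A minimal sketch of that lifecycle (assuming the standard oslo.messaging
notification listener API; the transport config, topic and endpoint here are
placeholders) looks roughly like this:

    import oslo_messaging
    from oslo_config import cfg

    class MyEndpoint(object):
        # hypothetical endpoint; oslo.messaging dispatches on priority level
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(event_type)

    transport = oslo_messaging.get_transport(cfg.CONF)  # placeholder config
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [MyEndpoint()])

    listener.start()
    # ... run until shutdown is requested ...
    listener.stop()   # stop consuming new messages first
    listener.wait()   # then wait for in-flight messages to finish processing

Calling wait() without having called stop() first is what produces the warning.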

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Ed Leafe
On Nov 5, 2015, at 2:43 AM, Tang Chen  wrote:

> I'm sorry that I cannot understand why resize and migrate are the same thing 
> behind.

Resize is essentially a migration to the same host, rather than a different 
host. The process is still taking an existing VM and using it to create another 
VM that appears to the user as the same (ID, networking, attached volumes, 
metadata, etc.)

-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-11-05 Thread Thomas Morin

Hi Ihar,

Ihar Hrachyshka :

Reviving the thread.
[...] (I appreciate if someone checks me on the following though):


This is an excellent recap.



 I set up a new etherpad to collect feedback from subprojects [2].


I've filled in details for networking-bgpvpn.
Please tell me if you need more information.



Once we collect use cases there and agree on agent API for extensions 
(even if per agent type), we will implement it and define as stable 
API, then pass objects that implement the API into extensions thru 
extension manager. If extensions support multiple agent types, they 
can still distinguish between which API to use based on agent type 
string passed into extension manager.
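
As a purely illustrative sketch (class and method names here are invented, only
the shape of the idea matters): the agent would hand an agent-specific API object
to its extensions, and an extension could branch on the agent type string:

    # Invented names: only illustrates passing an agent API object into an
    # l2 agent extension and selecting behaviour based on the agent type.
    class OvsAgentExtensionAPI(object):
        """What the OVS agent could choose to expose to its extensions."""
        def __init__(self, int_br, tun_br):
            self.int_br = int_br
            self.tun_br = tun_br

        def request_int_br(self):
            return self.int_br

    class BgpvpnAgentExtension(object):
        def consume_api(self, agent_api):
            # keep whatever the hosting agent decided to expose
            self.agent_api = agent_api

        def initialize(self, agent_type):
            if agent_type == 'ovs':
                self.int_br = self.agent_api.request_int_br()
            # other agent types would expose/use a different subset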


I really hope we start to collect use cases early so that we have time 
to polish agent API and make it part of l2 extensions earlier in 
Mitaka cycle.


We'll be happy to validate the applicability of this approach as soon as 
something is ready.


Thanks for taking up this work!

-Thomas




Ihar Hrachyshka  wrote:

On 30 Sep 2015, at 12:53, Miguel Angel Ajo  
wrote:




Ihar Hrachyshka wrote:

On 30 Sep 2015, at 12:08, thomas.mo...@orange.com wrote:

Hi Ihar,

Ihar Hrachyshka :

Miguel Angel Ajo :

Do you have a rough idea of what operations you may need to do?
Right now, what bagpipe driver for networking-bgpvpn needs to 
interact with is:

- int_br OVSBridge (read-only)
- tun_br OVSBridge (add patch port, add flows)
- patch_int_ofport port number (read-only)
- local_vlan_map dict (read-only)
- setup_entry_for_arp_reply method (called to add static ARP 
entries)

Sounds very tightly coupled to OVS agent.
Please bear in mind, the extension interface will be available 
from different agent types
(OVS, SR-IOV, [eventually LB]), so this interface you're 
talking about could also serve as
a translation driver for the agents (where the translation is 
possible), I totally understand
that most extensions are specific agent bound, and we must be 
able to identify

the agent we're serving back exactly.
Yes, I do have this in mind, but what we've identified for now 
seems to be OVS specific.
Indeed it does. Maybe you can try to define the needed pieces in 
high level actions, not internal objects you need to access to. 
Like ‘- connect endpoint X to Y’, ‘determine segmentation id for 
a network’ etc.
I've been thinking about this, but would tend to reach the 
conclusion that the things we need to interact with are pretty 
hard to abstract out into something that would be generic across 
different agents.  Everything we need to do in our case relates to 
how the agents use bridges and represent networks internally: 
linuxbridge has one bridge per Network, while OVS has a limited 
number of bridges playing different roles for all networks with 
internal segmentation.


To look at the two things you  mention:
- "connect endpoint X to Y" : what we need to do is redirect the 
traffic destined to the gateway of a Neutron network, to the 
thing that will do the MPLS forwarding for the right BGP VPN 
context (called VRF), in our case br-mpls (that could be done with 
an OVS table too) ; that action might be abstracted out to hide 
the details specific to OVS, but I'm not sure on how to  name the 
destination in a way that would be agnostic to these details, and 
this is not really relevant to do until we have a relevant context 
in which the linuxbridge would pass packets to something doing 
MPLS forwarding (OVS is currently the only option we support for 
MPLS forwarding, and it does not really make sense to mix 
linuxbridge for Neutron L2/L3 and OVS for MPLS)
- "determine segmentation id for a network": this is something 
really OVS-agent-specific, the linuxbridge agent uses multiple 
linux bridges, and does not rely on internal segmentation


Completely abstracting out packet forwarding pipelines in OVS and 
linuxbridge agents would possibly allow defining an interface that 
agent extensions could use without knowing anything specific 
to OVS or the linuxbridge, but I believe this is a very 
significant task to tackle.


If you look for a clean way to integrate with reference agents, 
then it’s something that we should try to achieve. I agree it’s not 
an easy thing.


Just an idea: can we have a resource for traffic forwarding, 
similar to security groups? I know folks are not ok with extending 
security groups API due to compatibility reasons, so maybe fwaas is 
the place to experiment with it.


Hopefully it will be acceptable to create an interface, even if it 
exposes a set of methods specific to the linuxbridge agent and a 
set of methods specific to the OVS agent.  That would mean that 
the agent extension that can work in both contexts (not our case 
yet) would check the agent type before using the first set or the 
second set.


The assumption of the whole idea of l2 agent extensions is that 
they are agent agnostic. In case of QoS, we implemented a common 
QoS extension that can be plugged in any agent [1], and a set 

Re: [openstack-dev] [networking-onos]: Proposing new cores for networking-onos

2015-11-05 Thread Gal Sagie
+1 for both from me

On Thu, Nov 5, 2015 at 2:53 PM, Vikram Choudhary  wrote:

> Hi All,
>
> I would like to propose Mr. Ramanjaneya Reddy Palleti and Mr. Dongfeng as
> new cores for the networking-onos project. Their contributions were significant
> in the last Liberty cycle w.r.t. this project.
>
> *Facts:*
> http://stackalytics.com/?metric=loc&module=networking-onos&release=all
>
> Request existing cores to vote for this proposal.
>
> Thanks
> Vikram
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-05 Thread Akihiro Motoki
2015-11-05 21:30 GMT+09:00 Gal Sagie :
> The current OVS binding proposals are not for nested containers.
> I am not sure if you are asking about that case or about the nested
> containers inside a VM case.
>
> For the nested containers, we will use Neutron solutions that support this
> kind of configuration, for example
> if you look at OVN you can define "parent" and "sub" ports, so OVN knows to
> perform the logical pipeline in the compute host
> and only perform VLAN tagging inside the VM (as Toni mentioned)

Through the summit discussion I felt that the VLAN-aware VM effort affects
many on-going efforts in the Neutron stadium, including Kuryr.
Please keep an eye on the VLAN-aware VM effort; your feedback would
be appreciated.
The initial effort is found in
https://review.openstack.org/#/c/210309/ (Trunk port: API extension).
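
Roughly, the idea there (field names below are only illustrative, the API is still
under review) is that a VM port acts as a parent for subports, each mapped to a
VLAN tag inside the VM:

    # Illustrative only: the shape of a "trunk"/VLAN-aware-VM relationship,
    # with container ports carried as tagged subports of the VM's port.
    trunk = {
        "parent_port": "vm-port-uuid",
        "subports": [
            {"port": "container-a-port-uuid",
             "segmentation_type": "vlan", "segmentation_id": 101},
            {"port": "container-b-port-uuid",
             "segmentation_type": "vlan", "segmentation_id": 102},
        ],
    }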

Akihiro


>
> If you need more clarification you can catch me on IRC as well and we can
> talk.
>
> On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary 
> wrote:
>>
>> Hi All,
>>
>> I would appreciate inputs on the following queries:
>> 1. Are we assuming nova bm nodes to be the docker hosts for now?
>>
>> If Not:
>>  - Assuming nova vm as docker host and ovs as networking plugin:
>> This line is from the etherpad[1], "Each driver would have an
>> executable that receives the name of the veth pair that has to be bound to
>> the overlay".
>> Query 1: As per the current ovs binding proposals by Feisky[2]
>> and Diga[3], the vif seems to be bound to br-int on the VM. I am unable to
>> understand how the overlay will work. AFAICT, neutron will configure the br-tun of
>> the compute machine's ovs only. How will the overlay (br-tun) configuration happen
>> inside the VM?
>>
>> Query 2: Are we having double encapsulation (both at the VM and the
>> compute host)? Is it not possible to bind the vif into the compute host's br-int?
>>
>> Query 3: I did not see subnet tags for the network plugin being
>> passed in any of the binding patches[2][3][4]. Don't we need that?
>>
>>
>> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
>> [2]  https://review.openstack.org/#/c/241558/
>> [3]  https://review.openstack.org/#/c/232948/1
>> [4]  https://review.openstack.org/#/c/227972/
>>
>>
>> -Vikas Choudhary
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-05 Thread Mooney, Sean K
Hello
When you set OVS_DPDK_MODE=controller_ovs

you are disabling the install of ovs-dpdk on the controller node and only installing 
the mechanism driver.

If you want to install ovs-dpdk on the controller node you should set this 
value as follows:

OVS_DPDK_MODE=controller_ovs_dpdk

See 
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/_downloads/local.conf.single_node

OVS with DPDK will be installed in /usr/bin, not /usr/local/bin, as it does a 
system-wide install rather than a local install.

Installation documentation can be found here
https://github.com/openstack/networking-ovs-dpdk/tree/master/doc/source

The networking-ovs-dpdk repo has recently been moved from stackforge to the 
openstack namespace following the
retirement of stackforge.

Some links in the git repo still need to be updated to reflect this change.

Regards
sean
-Original Message-
From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com] 
Sent: Thursday, November 5, 2015 11:02 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-ovs-dpdk]

Hello all,

Trying to install openstack with ovs-dpdk driver from devstack.

Following is my localrc file

HOST_IP_IFACE=eth0
HOST_IP=10.0.2.15
HOST_NAME=$(hostname)

DATABASE_PASSWORD=open
RABBIT_PASSWORD=open
SERVICE_TOKEN=open
SERVICE_PASSWORD=open
ADMIN_PASSWORD=open
MYSQL_PASSWORD=open
HORIZON_PASSWORD=open


enable_plugin networking-ovs-dpdk
https://github.com/stackforge/networking-ovs-dpdk master 
OVS_DPDK_MODE=controller_ovs

disable_service n-net
disable_service n-cpu
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service n-novnc

DEST=/opt/stack
SCREEN_LOGDIR=$DEST/logs/screen
LOGFILE=${SCREEN_LOGDIR}/xstack.sh.log
LOGDAYS=1

Q_ML2_TENANT_NETWORK_TYPE=vlan
ENABLE_TENANT_VLANS=True
ENABLE_TENANT_TUNNELS=False

#Dual socket platform with 16GB RAM,3072*2048kB hugepages leaves ~4G for the 
system.
OVS_NUM_HUGEPAGES=2048
#Dual socket platform with 64GB RAM,14336*2048kB hugepages leaves ~6G for the 
system.
#OVS_NUM_HUGEPAGES=14336

OVS_DATAPATH_TYPE=netdev
OVS_LOG_DIR=/opt/stack/logs
OVS_BRIDGE_MAPPINGS=public:br-ex

ML2_VLAN_RANGES=public:100:200
MULTI_HOST=1

#[[post-config|$NOVA_CONF]]
#[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter


After running ./stack.sh, which was successful, I could see that in the ml2_conf.ini 
file ovsdpdk was added as the mechanism driver. But the agent running was still 
openvswitch. I tried running ovsdpdk in the q-agt screen, but it failed because ovsdpdk 
was not installed in /usr/local/bin, which I thought devstack is supposed to do.
I also tried running setup.py in the networking-ovs-dpdk folder, but that did not 
install ovs-dpdk in /usr/local/bin either.

I am stuck here. Please guide me on how to proceed further. Also, the Readme in the 
networking-ovs-dpdk folder says the installation instructions are 
available at the link below - 
http://git.openstack.org/cgit/stackforge/networking-ovs-dpdk/tree/doc/source/installation.rst

But no repos were found there. Kindly guide me to a doc or something on how to build 
ovs-dpdk from devstack.

Thank you,
Prathyusha

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-onos]: Proposing new cores for networking-onos

2015-11-05 Thread Vikram Choudhary
Hi All,

I would like to propose Mr. Ramanjaneya Reddy Palleti and Mr. Dongfeng as
new cores for the networking-onos project. Their contributions were significant
in the last Liberty cycle w.r.t. this project.

*Facts:*
http://stackalytics.com/?metric=loc&module=networking-onos&release=all

Request existing cores to vote for this proposal.

Thanks
Vikram
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Sean Dague
On 11/05/2015 03:08 AM, Chris Dent wrote:
> On Thu, 5 Nov 2015, Robert Collins wrote:
> 
>> In the session we were told that zookeeper is already used in CI jobs
>> for ceilometer (was this wrong?) and thats why we figured it made a
>> sane default for devstack.
> 
> For clarity: What ceilometer (actually gnocchi) is doing is using tooz
> in CI (gate-ceilometer-dsvm-integration). And for now it is using
> redis as that was "simple".
> 
> Outside of CI it is possible to deploy ceilo, aodh and gnocchi to use
> tooz for coordinating group partitioning in active-active HA setups
> and shared locks. Again the standard deploy for that has been to use
> redis because of availability. It's fairly understood that zookeeper
> would be more correct but there are packaging concerns.

What are the packaging concerns for zookeeper?

-Sean

-- 
Sean Dague
http://dague.net
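
For context, the tooz usage being discussed is along these lines (a minimal
sketch; the backend URL, member id and lock name are placeholders, and the
zookeeper backend additionally needs its client library, kazoo, installed):

    # Minimal tooz sketch: switching between redis and zookeeper is just a
    # matter of changing the backend URL (plus installing the matching driver).
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'agent-1')
    coordinator.start()

    lock = coordinator.get_lock(b'my-shared-lock')
    with lock:
        pass  # work that must not run concurrently across workers

    coordinator.stop()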

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

