Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow
Sure, so that helps, except it still has the issue of bumping up against 
the mismatch of the API(s) of nova. This is why I'd rather have a 
template kind of format (as say the input API) that allows for 
(optionally) expressing such container specific capabilities/constraints.


Then some project that can understand that template/format can, if 
needed, talk to a COE (or similar project) to translate that template 
'segment' into a realized entity using the capabilities/constraints that 
the template specified.
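
To make that a bit more concrete, here is a rough sketch of what such a template 'segment' with optional container hints might look like (purely illustrative; the field names, the 'container:' block, and the image names are assumptions, not an existing format):

    ---
    components:
    - label: api
      image: myapp-api            # hypothetical image name
      count: 3
      stateless: true
      # optional, container-specific capabilities/constraints that a
      # COE-aware backend could translate into a realized entity
      container:
        runtime: docker
        cpu_shares: 512
        memory_limit: 512MB
        restart_policy: always
    - label: worker
      image: myapp-worker         # hypothetical image name
      count: 2
      stateless: true
    ...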


Overall it starts to feel like maybe it is time to change the upper and 
lower systems and shake things up a little ;)


Peng Zhao wrote:

I'd take the idea further. Imagine a typical Heat template, what you
need to do is:

- replace the VM id with Docker image id
- nothing else
- run the script with a normal heat engine
- the entire stack gets deployed in seconds

Done!
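
As a rough illustration (assuming a nova-docker or Hyper-style driver is configured so that Glance images are Docker images; the resource, image, and network names below are made up), the template itself barely changes:

    heat_template_version: 2015-04-30
    description: Minimal stack; only the image reference differs from a VM stack.
    resources:
      app_server:
        type: OS::Nova::Server
        properties:
          image: nginx:latest      # a Docker image registered in Glance, not a VM image
          flavor: m1.small
          networks:
            - network: private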

Well, that sounds like nova-docker. What about cinder and neutron? They
don't work well with Linux containers! The answer is Hypernova
(https://github.com/hyperhq/hypernova) or Intel ClearContainer, offering
seamless integration with most OpenStack components.

Summary: minimal changes to interface and upper systems, much smaller
image and much better developer workflow.

Peng

- Hyper_ Secure Container Cloud



On Wed, Apr 13, 2016 5:23 AM, Joshua Harlow harlo...@fastmail.com
 wrote:

Fox, Kevin M wrote:
> I think part of the problem is containers are mostly orthogonal to vms/bare metal. Containers are a package for a single service. Multiple can run on a single vm/bare metal host. Orchestration like Kubernetes comes in to turn a pool of vm's/bare metal into a system that can easily run multiple containers.

Is the orthogonal part a problem because we have made it so or is it just how it really is?

Brainstorming starts here:

Imagine a descriptor language like (which I stole from https://review.openstack.org/#/c/210549 and modified):

   ---
   components:
   - label: frontend
     count: 5
     image: ubuntu_vanilla
     requirements: high memory, low disk
     stateless: true
   - label: database
     count: 3
     image: ubuntu_vanilla
     requirements: high memory, high disk
     stateless: false
   - label: memcache
     count: 3
     image: debian-squeeze
     requirements: high memory, no disk
     stateless: true
   - label: zookeeper
     count: 3
     image: debian-squeeze
     requirements: high memory, medium disk
     stateless: false
     backend: VM
   networks:
   - label: frontend_net
     flavor: "public network"
     associated_with:
     - frontend
   - label: database_net
     flavor: high bandwidth
     associated_with:
     - database
   - label: backend_net
     flavor: high bandwidth and low latency
     associated_with:
     - zookeeper
     - memchache
   constraints:
   - ref: container_only
     params:
     - frontend
   - ref: no_colocated
     params:
     - database
     - frontend
   - ref: spread
     params:
     - database
   - ref: no_colocated
     params:
     - database
     - frontend
   - ref: spread
     params:
     - memcache
   - ref: spread
     params:
     - zookeeper
   - ref: isolated_network
     params:
     - frontend_net
     - database_net
     - backend_net
   ...

Now nothing in the above is about container, or baremetal or vms, (although an 'advanced' constraint can be that a component must be on a container, and it must say be deployed via docker image XYZ...); instead it's just about the constraints that a user has on their deployment and the components associated with it. It can be left up to some consuming project of that format to decide how to turn that desired description into an actual description (aka a full expanding of that format into an actual deployment plan), possibly say by optimizing for density (packing as many things into containers) or optimizing for security (by using VMs) or optimizing for performance (by using bare-metal).

> So, rather than concern itself with supporting launching through a COE and through Nova, which are two totally different code paths, OpenStack advanced services like Trove could just use a Magnum COE and have a UI that asks which existing Magnum COE to launch in, or alternately kick off the "Launch new Magnum COE" workflow in horizon, then follow up with the Trove launch workflow. Trove then would support being able to use containers, users could potentially pack more containers onto their vm's than just Trove, and it still would work with both Bare Metal and VM's the same way since Magnum can launch on either. I'm afraid supporting both containers and non container deployment with Trove will be a large effort with very little code sharing. It may be easiest to have a flag version where non container deployments are upgraded to containers then non container support is dropped.

Sure trove seems like it would be a consumer of whatever interprets that format, just like many other consumers could be (with the special case that

Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-12 Thread Fox, Kevin M
Ops are asking for you to make it easy for them to make their security weak. 
And as a user of other folks' clouds, I'd have no way to know the cloud is in 
that mode. That seems really bad for app developers/users.

Barbican, like some of the other services, won't become common if folks keep 
trying to reimplement it so they don't have to depend on it. Folks said the same 
things about Keystone. Ultimately it was worth making it a dependency.

Keystone doesn't support encryption, so you are asking for new functionality 
duplicating Barbican either way.

And we do understand the point of what you are trying to do. We just don't see 
eye to eye on it being a good thing to do. If you are invested enough in 
setting up an HA setup where you would need a clustered solution, Barbican is not 
that much of an extra lift compared to the other services you've already had to 
deploy. I've deployed both HA setups and Barbican before. HA is way worse.

Thanks,
Kevin



From: Adrian Otto
Sent: Tuesday, April 12, 2016 8:06:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates

Please don't miss the point here. We are seeking a solution that allows a 
location to place a client side encrypted blob of data (A TLS cert) that 
multiple magnum-conductor processes on different hosts can reach over the 
network.

We *already* support using Barbican for this purpose, as well as storage in 
flat files (not as secure as Barbican, and only works with a single conductor) 
and are seeking a second alternative for clouds that have not yet adopted 
Barbican, and want to use multiple conductors. Once Barbican is common in 
OpenStack clouds, both alternatives are redundant and can be deprecated. If 
Keystone depends on Barbican, then we have no reason to keep using it. That 
will mean that Barbican is core to OpenStack.

Our alternative to using Keystone is storing the encrypted blobs in the Magnum 
database which would cause us to add an API feature in magnum that is the exact 
functional equivalent of the credential store in Keystone. That is something we 
are trying to avoid by leveraging existing OpenStack APIs.
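
For reference, the Keystone /v3/credentials resource under discussion takes roughly this shape (shown here as YAML for readability; the type string and the blob contents are illustrative placeholders for the client-side encrypted certificate material):

    credential:
      user_id: <magnum trustee or service user id>
      project_id: <project owning the bay/cluster>
      type: magnum-tls             # free-form type string, illustrative
      blob: |
        -----BEGIN ENCRYPTED DATA-----
        <client-side encrypted TLS cert/key bundle>
        -----END ENCRYPTED DATA-----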

--
Adrian

On Apr 12, 2016, at 3:44 PM, Dolph Mathews 
> wrote:


On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad 
> wrote:
Keystone's credential API pre-dates barbican. We started talking about having 
the credential API backed by barbican after it was a thing. I'm not sure if any 
work has been done to move the credential API in this direction. From a 
security perspective, I think it would make sense for keystone to be backed by 
barbican.

+1

And regarding the "inappropriate use of keystone," I'd agree... without this 
spec, keystone is entirely useless as any sort of alternative to Barbican:

  https://review.openstack.org/#/c/284950/

I suspect Barbican will forever be a much more mature choice for Magnum.


On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu 
> wrote:
Hi all,

In short, some Magnum team members proposed to store TLS certificates in the 
Keystone credential store. As Magnum PTL, I want to get agreement (or 
non-disagreement) from the OpenStack community in general, and the Keystone 
community in particular, before approving the direction.

In detail, Magnum leverages TLS to secure the API endpoint of 
kubernetes/docker swarm. The usage of TLS requires a secure store for storing 
TLS certificates. Currently, we leverage Barbican for this purpose, but we 
constantly receive requests to decouple Magnum from Barbican (because users 
normally don't have Barbican installed in their clouds). Some Magnum team 
members proposed to leverage the Keystone credential store as a Barbican 
alternative [1]. Therefore, I want to confirm the Keystone team's position 
on this proposal (I remember someone from Keystone mentioned this is an 
inappropriate use of Keystone. May I ask for further clarification?). Thanks 
in advance.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

Best regards,
Hongbin




Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-12 Thread Morgan Fainberg
On Tue, Apr 12, 2016 at 8:06 PM, Adrian Otto 
wrote:

> Please don't miss the point here. We are seeking a solution that allows a
> location to place a client side encrypted blob of data (A TLS cert) that
> multiple magnum-conductor processes on different hosts can reach over the
> network.
>
> We *already* support using Barbican for this purpose, as well as storage
> in flat files (not as secure as Barbican, and only works with a single
> conductor) and are seeking a second alternative for clouds that have not
> yet adopted Barbican, and want to use multiple conductors. Once Barbican is
> common in OpenStack clouds, both alternatives are redundant and can be
> deprecated. If Keystone depends on Barbican, then we have no reason to keep
> using it. That will mean that Barbican is core to OpenStack.
>
> Our alternative to using Keystone is storing the encrypted blobs in the
> Magnum database which would cause us to add an API feature in magnum that
> is the exact functional equivalent of the credential store in Keystone.
> That is something we are trying to avoid by leveraging existing OpenStack
> APIs.
>
>
Is it really unreasonable to make Magnum depend on Barbican? I know I
discussed this with you previously, but I would like to know how much
pushback you're really seeing on saying "Barbican is important for these
security reasons in a scaled-up environment and here is why we made this
choice to depend on it". Secure by default is far better than an option
that is significantly sub-optimal.

So, is Barbican support really hampering Magnum in significant ways? If so,
what can we do to improve the story to make Barbican compelling instead of
needing this alternative?

+1 to Dolph's comment on Barbican being more mature *and* another +1 for
the comment that credentials being un-encrypted in keystone makes storing
secure credentials in keystone significantly less desirable.

These questions are intended to just fill in some blanks I am seeing so we
have a complete story and can look at prioritizing work/specs/etc.


> --
> Adrian
>
> On Apr 12, 2016, at 3:44 PM, Dolph Mathews 
> wrote:
>
>
> On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad 
> wrote:
>
>> Keystone's credential API pre-dates barbican. We started talking about
>> having the credential API backed by barbican after it was a thing. I'm not
>> sure if any work has been done to move the credential API in this
>> direction. From a security perspective, I think it would make sense for
>> keystone to be backed by barbican.
>>
>
> +1
>
> And regarding the "inappropriate use of keystone," I'd agree... without
> this spec, keystone is entirely useless as any sort of alternative to
> Barbican:
>
>   https://review.openstack.org/#/c/284950/
>
> I suspect Barbican will forever be a much more mature choice for Magnum.
>
>
>>
>> On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu 
>> wrote:
>>
>>> Hi all,
>>>
>>>
>>>
>>> In short, some Magnum team members proposed to store TLS certificates in
>>> Keystone credential store. As Magnum PTL, I want to get agreements (or
>>> non-disagreement) from OpenStack community in general, Keystone community
>>> in particular, before approving the direction.
>>>
>>>
>>>
>>> In details, Magnum leverages TLS to secure the API endpoint of
>>> kubernetes/docker swarm. The usage of TLS requires a secure store for
>>> storing TLS certificates. Currently, we leverage Barbican for this purpose,
>>> but we constantly received requests to decouple Magnum from Barbican
>>> (because users normally don’t have Barbican installed in their clouds).
>>> Some Magnum team members proposed to leverage Keystone credential store as
>>> a Barbican alternative [1]. Therefore, I want to confirm the Keystone
>>> team's position on this proposal (I remember someone from Keystone
>>> mentioned this is an inappropriate use of Keystone. May I ask for further
>>> clarification?). Thanks in advance.
>>>
>>>
>>>
>>> [1]
>>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>>>
>>>
>>>
>>> Best regards,
>>>
>>> Hongbin
>>>
>>>

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Peng Zhao
I'd take the idea further. Imagine a typical Heat template; what you need to do
is:

- replace the VM id with Docker image id
- nothing else
- run the script with a normal heat engine
- the entire stack gets deployed in seconds

Done!

Well, that sounds like nova-docker. What about cinder and neutron? They don't
work well with Linux containers! The answer is Hypernova
(https://github.com/hyperhq/hypernova) or Intel ClearContainer, offering seamless
integration with most OpenStack components.

Summary: minimal changes to interface and upper systems, much smaller image and
much better developer workflow.
Peng
- Hyper_ Secure Container Cloud


On Wed, Apr 13, 2016 5:23 AM, Joshua Harlow harlo...@fastmail.com wrote:
Fox, Kevin M wrote:
> I think part of the problem is containers are mostly orthogonal to vms/bare metal. Containers are a package for a single service. Multiple can run on a single vm/bare metal host. Orchestration like Kubernetes comes in to turn a pool of vm's/bare metal into a system that can easily run multiple containers.

Is the orthogonal part a problem because we have made it so or is it just how it really is?

Brainstorming starts here: Imagine a descriptor language like (which I stole from https://review.openstack.org/#/c/210549 and modified): --- components: - label: frontend count: 5 image: ubuntu_vanilla requirements: high memory, low disk stateless: true - label: database count: 3 image: ubuntu_vanilla requirements: high memory, high disk stateless: false - label: memcache count: 3 image: debian-squeeze requirements: high memory, no disk stateless: true - label: zookeeper count: 3 image: debian-squeeze requirements: high memory, medium disk stateless: false backend: VM networks: - label: frontend_net flavor: "public network

instead it's just about the constraints that a user has on their deployment and the components associated with it. It can be left up to some consuming project of that format to decide how to turn that desired description into an actual description (aka a full expanding of that format into an actual deployment plan), possibly say by optimizing for density (packing as many things into containers) or optimizing for security (by using VMs) or optimizing for performance (by using bare-metal).

> So, rather than concern itself with supporting launching through a COE and through Nova, which are two totally different code paths, OpenStack advanced services like Trove could just use a Magnum COE and have a UI that asks which existing Magnum COE to launch in, or alternately kick off the "Launch new Magnum COE" workflow in horizon, then follow up with the Trove launch workflow. Trove then would support being able to use containers, users could potentially pack more containers onto their vm's than just Trove, and it still would work with both Bare Metal and VM's the same way since Magnum can launch on either. I'm afraid supporting both containers and non container deployment with Trove will be a large effort with very little code sharing. It may be easiest to have a flag version where non container deployments are upgraded to containers then non container support is dropped.

Sure trove seems like it would be a consumer of whatever interprets that format, just like many other consumers could be (with the special case that trove creates such a format on-behalf of some other consumer, aka the trove user).

> As for the app-catalog use case, the app-catalog project (http://apps.openstack.org) is working on some of that.
>
> Thanks,
> Kevin
>
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: Tuesday, April 12, 2016 12:16 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
> Flavio Percoco wrote:
>> On 11/04/16 18:05 +, Amrith Kumar wrote:
>>> Adrian, thx for your detailed mail.
>>>
>>> Yes, I was hopeful of a silver bullet and as we’ve discussed before (I think it was Vancouver), there’s likely no silver bullet in this area. After that conversation, and some further experimentation, I found that even if Trove had access to a single Compute API, there were other significant complications further down the road, and I didn’t pursue the project further at the time.
>>>
>> Adrian, Amrith,
>>
>> I've spent enough time researching on this area during the last month and my conclusion is pretty much the above. There's no silver bullet in this area and I'd argue there shouldn't be one. Containers, bare metal and VMs differ in such a way (feature-wise) that it'd not be good, as far as deploying databases goes, for there to be one compute API. Containers allow for a different deployment architecture than VMs and so does bare metal.
>
> Just some

Re: [openstack-dev] [tricircle] Hello

2016-04-12 Thread joehuang
Hi Lige,

What's your email address on https://review.openstack.org? I am not able to add 
you as a reviewer.

The cross-pod L2 networking BP is now in review at 
https://review.openstack.org/#/c/304540/; you can add yourself as a reviewer and 
comment on it. Your comments from the financial IT segment are quite 
important.

The discussion about the cross pod L2 networking is in : 
https://etherpad.openstack.org/p/TricircleCrossPodL2Networking

Best Regards
Chaoyi Huang ( Joe Huang )

From: 李戈 [mailto:lgmcglm...@126.com]
Sent: Tuesday, April 05, 2016 11:41 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [tricircle] Hello

Hello Team,
  I am lige, an OpenStack coder at China UnionPay, and glad to join the team.

   thx.





[openstack-dev] [tricircle]weekly meeting of Apr. 13

2016-04-12 Thread joehuang
Hi,

We had a very good discussion about cross-OpenStack L2 networking 
(https://etherpad.openstack.org/p/TricircleCrossPodL2Networking) and dynamic pod 
binding (https://etherpad.openstack.org/p/TricircleDynamicPodBinding). This week 
we will discuss what's missing in the first release of Tricircle for Mitaka.

IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting, every 
Wednesday starting at UTC 13:00.

Agenda:
# Reliable async. job - items left
# tempest test
# Tricircle Mitaka Release
# Link: https://etherpad.openstack.org/p/TricircleToDo

If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang ( Joe Huang )



Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-12 Thread Adrian Otto
Please don't miss the point here. We are seeking a solution that allows a 
location to place a client side encrypted blob of data (A TLS cert) that 
multiple magnum-conductor processes on different hosts can reach over the 
network.

We *already* support using Barbican for this purpose, as well as storage in 
flat files (not as secure as Barbican, and only works with a single conductor) 
and are seeking a second alternative for clouds that have not yet adopted 
Barbican, and want to use multiple conductors. Once Barbican is common in 
OpenStack clouds, both alternatives are redundant and can be deprecated. If 
Keystone depends on Barbican, then we have no reason to keep using it. That 
will mean that Barbican is core to OpenStack.

Our alternative to using Keystone is storing the encrypted blobs in the Magnum 
database which would cause us to add an API feature in magnum that is the exact 
functional equivalent of the credential store in Keystone. That is something we 
are trying to avoid by leveraging existing OpenStack APIs.

--
Adrian

On Apr 12, 2016, at 3:44 PM, Dolph Mathews 
> wrote:


On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad 
> wrote:
Keystone's credential API pre-dates barbican. We started talking about having 
the credential API backed by barbican after it was a thing. I'm not sure if any 
work has been done to move the credential API in this direction. From a 
security perspective, I think it would make sense for keystone to be backed by 
barbican.

+1

And regarding the "inappropriate use of keystone," I'd agree... without this 
spec, keystone is entirely useless as any sort of alternative to Barbican:

  https://review.openstack.org/#/c/284950/

I suspect Barbican will forever be a much more mature choice for Magnum.


On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu 
> wrote:
Hi all,

In short, some Magnum team members proposed to store TLS certificates in 
Keystone credential store. As Magnum PTL, I want to get agreements (or 
non-disagreement) from OpenStack community in general, Keystone community in 
particular, before approving the direction.

In details, Magnum leverages TLS to secure the API endpoint of 
kubernetes/docker swarm. The usage of TLS requires a secure store for storing 
TLS certificates. Currently, we leverage Barbican for this purpose, but we 
constantly received requests to decouple Magnum from Barbican (because users 
normally don't have Barbican installed in their clouds). Some Magnum team 
members proposed to leverage Keystone credential store as a Barbican 
alternative [1]. Therefore, I want to confirm the Keystone team's position 
on this proposal (I remember someone from Keystone mentioned this is an 
inappropriate use of Keystone. May I ask for further clarification?). Thanks 
in advance.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

Best regards,
Hongbin



Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Devdatta Kulkarni
Hi,

Reading through the thread, I thought of adding the following points, which 
might be relevant to this discussion.

1) Related to the point of providing an abstraction for deploying apps to 
different form factors:

OpenStack Solum is a project which allows deploying applications starting from
their source code. Currently we support the following deployment options:
(a) deploying app containers in a setup where nova is configured to use the 
nova-docker driver
(b) deploying app containers on a VM, with one container per VM
(c) deploying app containers to a COE such as a Docker Swarm cluster (currently 
under development)

For (a) and (b) we use parameterized Heat templates. For (c), we currently 
assume that a COE cluster is already created. Solum has a feature whereby app 
developers can provide different kinds of parameters with their app deployment 
request. This feature is used to provide cluster credentials with the app 
deployment request.

The deployment option is controlled by the operator at the time of Solum 
deployment. Solum's architecture is such that it is straightforward to add new 
deployers. I haven't looked at Ironic, so I won't be able to comment on how 
difficult/easy it would be to add an Ironic deployer in Solum. As Joshua 
mentioned, it will be interesting to consider different constraints (density, 
performance, isolation) when choosing the form factor for deploying app 
containers. Of course, it will also depend on how other OpenStack services are 
configured within that particular OpenStack setup.


2) Related to the point of high-level application description:
In Solum, we use a simple yaml format for describing an app to Solum. An example 
of an app file can be found here:
https://github.com/openstack/solum/blob/master/examples/apps/python_app.yaml
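
For a sense of what that looks like, an app file is a short YAML document roughly along these lines (a hypothetical sketch with made-up values; see the linked python_app.yaml for the actual fields Solum expects):

    version: 1
    name: example-python-app                 # illustrative values throughout
    languagepack: python
    source:
      repository: https://github.com/example/example-python-app.git
      revision: master
    workflow_config:
      test_cmd: ./unit_tests.sh
      run_cmd: python app.py
    ports:
      - 5000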

We have been planning to work on multi-container/microservice-based apps next, 
and have a spec for that:
https://review.openstack.org/#/c/254729/10/specs/mitaka/micro-service-architecture.rst

Any comments/feedback on the spec is welcome.


Lastly, in case you want to try out Solum, here is a link to setting up a 
development environment:
https://wiki.openstack.org/wiki/Solum/solum-development-setup

and getting started guide:
http://docs.openstack.org/developer/solum/getting_started/

Regards,
Devdatta


From: Joshua Harlow 
Sent: Tuesday, April 12, 2016 2:16 PM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Flavio Percoco wrote:
> On 11/04/16 18:05 +, Amrith Kumar wrote:
>> Adrian, thx for your detailed mail.
>>
>>
>>
>> Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
>> think it
>> was Vancouver), there’s likely no silver bullet in this area. After that
>> conversation, and some further experimentation, I found that even if
>> Trove had
>> access to a single Compute API, there were other significant
>> complications
>> further down the road, and I didn’t pursue the project further at the
>> time.
>>
>
> Adrian, Amrith,
>
> I've spent enough time researching on this area during the last month
> and my
> conclusion is pretty much the above. There's no silver bullet in this
> area and
> I'd argue there shouldn't be one. Containers, bare metal and VMs differ
> in such
> a way (feature-wise) that it'd not be good, as far as deploying
> databases goes,
> for there to be one compute API. Containers allow for a different
> deployment
> architecture than VMs and so does bare metal.

Just some thoughts from me, but why focus on the
compute/container/baremetal API at all?

I'd almost like a way that just describes how my app should be
interconnected, what is required to get it going, and the features
and/or scheduling requirements for the different parts of the app.

To me it feels like this isn't a compute API or really a heat API but
something else. Maybe it's closer to the docker compose API/template
format or something like it.
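
For comparison, a docker compose style description of a small app reads roughly like this (illustrative service and image names):

    version: "2"
    services:
      web:
        image: example/frontend:latest
        ports:
          - "80:80"
        depends_on:
          - db
      db:
        image: postgres:9.5
        volumes:
          - db_data:/var/lib/postgresql/data
    volumes:
      db_data: {}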

Perhaps such a thing needs a new project. I'm not sure, but it does feel
like that as developers we should be able to make such a thing that
still exposes the more advanced functionality of the underlying API so
that it can be used if really needed...

Maybe this is similar to an app-catalog, but that doesn't quite feel
like it's the right thing either so maybe somewhere in between...

IMHO it'd be nice to have a unified story around what this thing is, so 
that we as a community can drive (as a single group) toward that, maybe
this is where the product working group can help and we as a developer
community can also try to unify behind...

P.S. name for project should be 'silver' related, ha.

-Josh


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Amrith Kumar
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Tuesday, April 12, 2016 8:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> 
> On 11/04/16 18:05 +, Amrith Kumar wrote:
> >Adrian, thx for your detailed mail.
> >
> >
> >
> >Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
> >think it was Vancouver), there’s likely no silver bullet in this area.
> >After that conversation, and some further experimentation, I found that
> >even if Trove had access to a single Compute API, there were other
> >significant complications further down the road, and I didn’t pursue the
> project further at the time.
> >
> 
> Adrian, Amrith,
> 
> I've spent enough time researching on this area during the last month and
> my conclusion is pretty much the above. There's no silver bullet in this
> area and I'd argue there shouldn't be one. Containers, bare metal and VMs
> differ in such a way (feature-wise) that it'd not be good, as far as
> deploying databases goes, for there to be one compute API. Containers
> allow for a different deployment architecture than VMs and so does bare
> metal.
> 

[amrith] That is an interesting observation if we were developing a unified 
compute service. However, the issue for a project like Trove is not whether 
containers, VMs, and bare metal are the same or different, but rather what a 
user is looking to get out of a deployment of a database in each of those 
compute environments.

> 
> >We will be discussing Trove and Containers in Austin [1] and I’ll try
> >and close the loop with you on this while we’re in Town. I still would
> >like to come up with some way in which we can offer users the option of
> >provisioning database as containers.
> 
> As the person leading this session, I'm also looking forward to providing
> such provisioning facilities to Trove users. Let's do this.
> 

[amrith] In addition to hearing about how you plan to solve the problem, I 
would like to know what problem it is that you are planning to solve. Putting a 
database in a container is a solution, not a problem (IMHO). But the scope of 
this thread is broader so I'll stop at that.

> Cheers,
> Flavio
> 
> >
> >Thanks,
> >
> >
> >
> >-amrith
> >
> >
> >
> >[1] https://etherpad.openstack.org/p/trove-newton-summit-container
> >
> >
> >
> >From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> >Sent: Monday, April 11, 2016 12:54 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >
> >Cc: foundat...@lists.openstack.org
> >Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
> >One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> >
> >
> >
> >Amrith,
> >
> >
> >
> >I respect your point of view, and agree that the idea of a common
> >compute API is attractive… until you think a bit deeper about what that
> >would mean. We seriously considered a “global” compute API at the time
> >we were first contemplating Magnum. However, what we came to learn
> >through the journey of understanding the details of how such a thing
> >would be implemented, that such an API would either be (1) the lowest
> >common denominator (LCD) of all compute types, or (2) an exceedingly
> complex interface.
> >
> >
> >
> >You expressed a sentiment below that trying to offer choices for VM,
> >Bare Metal (BM), and Containers for Trove instances “adds considerable
> complexity”.
> >Roughly the same complexity would accompany the use of a comprehensive
> >compute API. I suppose you were imagining an LCD approach. If that’s
> >what you want, just use the existing Nova API, and load different
> >compute drivers on different host aggregates. A single Nova client can
> >produce VM, BM (Ironic), and Container (libvirt-lxc) instances all with
> >a common API (Nova) if it’s configured in this way. That’s what we do.
> >Flavors determine which compute type you get.
> >
> >
> >
> >If what you meant is that you could tap into the power of all the
> >unique characteristics of each of the various compute types (through
> >some modular extensibility framework) you’ll likely end up with
> >complexity in Trove that is comparable to integrating with the native
> >upstream APIs, along with the disadvantage of waiting for OpenStack to
> >continually catch up to the pace of change of the various upstream
> >systems on which it depends. This is a recipe for disappointment.
> >
> >
> >
> >We concluded that wrapping native APIs is a mistake, particularly when
> >they are sufficiently different than what the Nova API already offers.
> >Containers APIs have limited similarities, so when you try to make a
> >universal interface to all of them, you end up with a really
> >complicated mess. 

Re: [openstack-dev] [Congress] Guide for all reactive policy options? (execute[...])

2016-04-12 Thread Bryan Sullivan
Thanks for the hint, Masahito. A very useful command to know. It would be good 
to give a hint about this in the intro to the section "3. Manual Reactive 
Enforcement" 
(https://github.com/openstack/congress/blob/master/doc/source/enforcement.rst). 
That's where I was looking for a summary on how to know what's supported. e.g.

"You can see the supported actions for each Congress datasource driver with 
'openstack congress datasource actions show '"

With your suggestion I was able to get a test running for a "reserved subnet 
error": https://git.opnfv.org/cgit/copper/tree/tests/adhoc/reserved_subnet.sh

Thanks,
Bryan Sullivan

--
Date: Tue Apr 12 05:49:18 UTC 2016
From: Masahito MUROI 
muroi.masahito at lab.ntt.co.jp
   


Hi Bryan,

You can see neutron driver's action with 'openstack congress datasource 
actions show' command.  It shows all execution method supported by 
neutronclient.

BTW, the prefix of a reactive policy rule is the datasource *name*. If you 
initialize OpenStack with devstack, the datasource name for neutron is 
not 'neutron' but 'neutronv2'.

Best regards,
Masahito


Re: [openstack-dev] [Congress] Issues with Tox testing

2016-04-12 Thread GHANSHYAM MANN
Yes, that's right. Tempest will be able to run all tests (Tempest's own or a
plugin's) as long as the plugin is discoverable and all services the tests
interact with are reachable from the node where Tempest is installed.

In that case, you need to configure the service endpoints (keystone, congress,
etc.) correctly. Please refer to the configuration details here:
http://docs.openstack.org/developer/tempest/configuration.html

Regards
Ghanshyam Mann


On Tue, Apr 12, 2016 at 2:46 PM, Anusha Ramineni  wrote:
> Hi Bryan,
>
> Yes, tempest can be run outside devstack deployments. Please check the
> README in https://github.com/openstack/tempest on configuring tempest.
>
> As in Liberty, you need to copy the tests to tempest. I guess installing
> tempest on a different server should also work, as long as the congress service
> is discoverable (never tried it, though). But just to let you know, the congress
> Liberty version has minimal tempest coverage; in Mitaka we have enabled all
> the tempest tests.
>
> Best Regards,
> Anusha
>
> On 12 April 2016 at 10:43, Bryan Sullivan  wrote:
>>
>> Hi Anusha,
>>
>> That helps. Just one more question: in Liberty (which I'm currently based
>> upon) have the tempest tests been run outside of devstack deployments, i.e.
>> in an actual OpenStack deployment? The guide you reference mentions devstack
>> but it's not clear that the same process applies outside devstack:
>>
>> e.g. "To list all Congress test cases, run command in /opt/stack/tempest:"
>> references the "/opt/stack" folder which is not created outside of devstack
>> environments. Thus to run them in a full OpenStack deployment, do I need to
>> install  tempest and create an "opt/stack/tempest" folder to which the tests
>> are copied, on the same server where Congress is installed?
>>
>> I'll try Mitaka soon but I expect to have the same question there:
>> basically, are the tempest tests expected to be usable outside a devstack
>> deploy?
>>
>> I guess I could just try it, but I don't want to waste time if this is not
>> designed to be used outside devstack environments.
>>
>> Thanks,
>> Bryan Sullivan
>>
>> 
>> Date: Fri, 8 Apr 2016 09:01:29 +0530
>> From: anusha.ii...@gmail.com
>>
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Congress] Issues with Tox testing
>>
>> Hi Bryan,
>>
>> tox -epy27 doesn't run tempest tests , that is tests mentioned in
>> https://github.com/openstack/congress/tree/stable/liberty/contrib/tempest ,
>> it runs only unit tests , tests present in
>> https://github.com/openstack/congress/tree/stable/liberty/congress/tests .
>>
>> To run tempest tests, you need to manually copy the files to tempest and
>> run the tests as mentioned in following readme
>> https://github.com/openstack/congress/blob/stable/liberty/contrib/tempest/README.rst
>>
>> Mitaka supports tempest plugin, so manually copying tests to tempest can
>> be avoided if you are using mitaka.
>>
>> Hope I clarified your question.
>>
>>
>> Best Regards,
>> Anusha
>>
>> On 8 April 2016 at 08:51, Bryan Sullivan  wrote:
>>
>> OK, somehow I did not pick up on that, or dropped it along the way of
>> developing the script. Thanks for the clarification, also that Tempest is
>> not required. I should have clarified that I'm using stable/liberty as the
>> base. I will be moving to stable/mitaka soon, as part of the OPNFV Colorado
>> release development.
>>
>> One additional question then - are the tests run by "tox -epy27" the same
>> as the tests in the folder
>> https://github.com/openstack/congress/tree/stable/liberty/contrib/tempest?
>> If not, how are those tests supposed to be run for a non-devstack deploy (I
>> see reference to devstack in the readme)?
>>
>> I see that the folders have been reorganized for mitaka. My question is
>> per the goal to include as much of the Congress tests as possible in the
>> OPNFV CI/CD process. Not that I expect any to fail, I just want OPNFV to
>> leverage the full test suite. If for liberty that's best left as the tests
>> run by the tox command, then that's OK.
>>
>> Thanks,
>> Bryan Sullivan
>>
>> 
>> Date: Thu, 7 Apr 2016 17:11:36 -0700
>> From: ekcs.openst...@gmail.com
>> To: openstack-dev@lists.openstack.org
>>
>> Subject: Re: [openstack-dev] [Congress] Issues with Tox testing
>>
>> Thanks for the feedback, Bryan. Glad you got things working!
>>
>> 1. The instructions asking to install those packages are missing from kilo
>> (we’ll fix that), but they have been there since liberty. Was it perhaps
>> unclear because the line is too long?
>>
>> Additionally:
>>
>> $ sudo apt-get install git gcc python-dev libxml2 libxslt1-dev libzip-dev
>> mysql-server python-mysqldb build-essential libssl-dev libffi-dev
>>
>> 2. Tempest should not be required by the tox tests.
>>
>> Thanks!
>>
>> From: Bryan Sullivan 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 

Re: [openstack-dev] [Congress] Issues with Tox testing

2016-04-12 Thread Bryan Sullivan
Thanks for the clarification, Anusha. Sounds like I should focus my effort on 
using tempest tests in Mitaka.
Thanks,
Bryan Sullivan


  

Date: Tue Apr 12 05:46:58 UTC 2016
From: anusha.iiitm at gmail.com

Hi Bryan,

Yes, tempest can be run outside devstack deployments. Please check the
README in https://github.com/openstack/tempest on configuring tempest.

As in Liberty, you need to copy the tests to tempest. I guess installing
tempest on a different server should also work, as long as the congress service is
discoverable (never tried it, though). But just to let you know, the congress
Liberty version has minimal tempest coverage; in Mitaka we have enabled all
the tempest tests.

Best Regards,
Anusha



Re: [openstack-dev] [nova] Versioned notification work continues in Newton

2016-04-12 Thread Zhenyu Zheng
Great, glad to help.

On Tue, Apr 12, 2016 at 9:03 PM, Balázs Gibizer  wrote:

> Hi,
>
> I just want to let the community know that the versioned notification
> work we started in Mitaka is planned to be continued in Newton.
> The planned goals for Newton:
> * Transform the most important notification to the new format [1]
> * Help others to use the new framework adding new notifications [2], [3],
> [4]
> * Further help notification consumers by adding json schemas for the
> notifications [5]
>
> I will be in Austin during the whole summit week to discuss these ideas.
> I will restart the notification subteam meeting [6] after the summit to
> have
> a weekly synchronization point. All this work is followed up on the
> etherpad [7].
>
> Cheers,
> Gibi
>
> [1]
> https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-newton
> [2]
> https://blueprints.launchpad.net/nova/+spec/add-swap-volume-notifications
> [3]
> https://blueprints.launchpad.net/nova/+spec/expose-quiesce-unquiesce-api
> [4] https://blueprints.launchpad.net/nova/+spec/hypervisor-notification
> [5]
> https://blueprints.launchpad.net/nova/+spec/json-schema-for-notifications
> [6] https://wiki.openstack.org/wiki/Meetings/NovaNotification
> [7] https://etherpad.openstack.org/p/nova-versioned-notifications
>


Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-12 Thread Vikram Hosakote (vhosakot)
I value the friendliness and approachability that contributors seek.

I am totally fine with minor mistakes in documentation.

As they make more changes, they will understand/follow the process.

+1


Regards,
Vikram Hosakote
IRC: vhosakot

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, April 11, 2016 at 3:37 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla][vote] Nit-picking documentation changes

Hey folks,

The reviewers in Kolla tend to nit-pick the quickstart guide to death during 
reviews.  I'd like to keep that high bar in place for the QSG, because it is 
our most important piece of documentation at present.  However, when new 
contributors see the nitpicking going on in reviews, I think they may get 
discouraged about writing documentation for other parts of Kolla.

I'd prefer if the core reviewers held a lower bar for docs not related to the 
philosophy or quickstart guide document.  We can always iterate on these new 
documents (like the operator guide) to improve them and raise the bar on their 
quality over time, as we have done with the quickstart guide.  That way 
contributors won't feel nitpicked to death and shy away from improving the 
documentation.

If you are a core reviewer and agree with this approach please +1, if not 
please -1.

This is an unofficial vote and carries no commitment from the core reviewers, 
but I've included vote in the subject just to catch people's attention and to 
get folks thoughts on this matter.

Regards
-steve




Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow

Steve Baker wrote:

On 13/04/16 11:07, Joshua Harlow wrote:

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what
needs to connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat& docker COE& neutron sfc to
produce a final set of deployment scripts and then just runs it
through the meat grinder. :)

It would be awesome to use. It may be very difficult to implement.


+1 it will not be easy.

Although the complexity of it is probably no different than what a SQL
engine has to do to parse a SQL statement into an actionable plan, just
in this case it's an application deployment 'statement' and the
realization of that plan is of course where the 'meat' of the program
is.

It would be nice to connect with whatever neutron SFC stuff is being worked
on/exists and have a single project for this kind of stuff, but maybe
I am dreaming too much right now :-P



This sounds a lot like what the TOSCA spec[1] is aiming to achieve. I
could imagine heat-translator[2] gaining the ability to translate TOSCA
templates to either nova or COE specific heat templates which are then
created as stacks.

[1]
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csprd01/TOSCA-Simple-Profile-YAML-v1.0-csprd01.html


Since I don't know, does anyone in the wider world actually support 
TOSCA as their API? Or is TOSCA more of an exploratory kind of thing or 
something else (seems like there is TOSCA -> Heat?)? Forgive my lack of 
understanding ;)




[2] https://github.com/openstack/heat-translator
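
For anyone who hasn't looked at it, a TOSCA Simple Profile template is also YAML and reads roughly like this (a minimal, illustrative sketch; the node names and property values are made up):

    tosca_definitions_version: tosca_simple_yaml_1_0

    topology_template:
      node_templates:
        web_server:
          type: tosca.nodes.WebServer
          requirements:
            - host: web_host
        web_host:
          type: tosca.nodes.Compute
          capabilities:
            host:
              properties:
                num_cpus: 2
                mem_size: 4 GB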



If you ignore the non container use case, I think it might be fairly
easily mappable to all three COE's though.

Thanks,
Kevin


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Fox, Kevin M wrote:

I think part of the problem is containers are mostly orthogonal to
vms/bare metal. Containers are a package for a single service.
Multiple can run on a single vm/bare metal host. Orchestration like
Kubernetes comes in to turn a pool of vm's/bare metal into a system
that can easily run multiple containers.



Is the orthogonal part a problem because we have made it so or is it
just how it really is?

Brainstorming starts here:

Imagine a descriptor language like (which I stole from
https://review.openstack.org/#/c/210549 and modified):

---
components:
- label: frontend
count: 5
image: ubuntu_vanilla
requirements: high memory, low disk
stateless: true
- label: database
count: 3
image: ubuntu_vanilla
requirements: high memory, high disk
stateless: false
- label: memcache
count: 3
image: debian-squeeze
requirements: high memory, no disk
stateless: true
- label: zookeeper
count: 3
image: debian-squeeze
requirements: high memory, medium disk
stateless: false
backend: VM
networks:
- label: frontend_net
flavor: "public network"
associated_with:
- frontend
- label: database_net
flavor: high bandwidth
associated_with:
- database
- label: backend_net
flavor: high bandwidth and low latency
associated_with:
- zookeeper
- memchache
constraints:
- ref: container_only
params:
- frontend
- ref: no_colocated
params:
- database
- frontend
- ref: spread
params:
- database
- ref: no_colocated
params:
- database
- frontend
- ref: spread
params:
- memcache
- ref: spread
params:
- zookeeper
- ref: isolated_network
params:
- frontend_net
- database_net
- backend_net
...


Now nothing in the above is about container, or baremetal or vms,
(although an 'advanced' constraint can be that a component must be on a
container, and it must say be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (aka a full expanding of that format into an
actual deployment plan), possibly say by optimizing for density (packing
as many things into containers) or optimizing for security (by using VMs) or
optimizing for performance (by using bare-metal).


So, rather than concern itself with supporting launching through a
COE and through Nova, which are two totally different code paths,
OpenStack advanced services like Trove could just use a Magnum COE
and have a UI that asks which existing Magnum COE to launch in, or
alternately kick off the "Launch new Magnum COE" workflow in
horizon, then follow up with the Trove launch workflow. Trove then
would support being able to use containers, users could potentially
pack more containers onto their vm's than just Trove, and it still
would work with both Bare Metal and VM's the same way since Magnum
can launch on either. 

Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread Anita Kuno
On 04/12/2016 07:21 PM, Jay S. Bryant wrote:
> Sean,
> 
> Thanks for getting the discussion started.
> 
> I think R-14 might be a little tricky leading up to the 4th of July
> weekend.
> 
> It seems like R-17 is coming awful quick but could be done.  That or
> R-11 would be my second vote then.

R-11 is one of the weeks that Nova is considering, just to keep that in
mind.

Thanks Jay, Rochester is a nice location.

Thanks,
Anita.

> 
> We have had successful meet-ups in Rochester ... yes, the guest wifi
> works in Rochester ... I could check into using our facilities if anyone
> was interested.
> 
> Jay
> 
> On 04/12/2016 09:05 AM, Sean McGinnis wrote:
>> Hey Cinder team (and those interested),
>>
>> We've had a few informal conversations on the channel and in meetings,
>> but wanted to capture some things here and spread awareness.
>>
>> I think it would be good to start planning for our Newton midcycle.
>> These have been incredibly productive in the past (at least in my
>> opinion) so I'd like to get it on the schedule so folks can start
>> planning for it.
>>
>> For Mitaka we held our midcycle in the R-10 week. That seemed to work
>> out pretty well, but I also think it might be useful to hold it a little
>> earlier in the cycle to keep some momentum going and make sure things
>> stay pretty focused for the rest of the cycle.
>>
>> For reference, here is the current release schedule for Newton:
>>
>> http://releases.openstack.org/newton/schedule.html
>>
>> R-10 puts us in the last week of July.
>>
>> I would have a conflict R-16, R-15. We probably want to avoid US
>> Independence Day R-13, and milestone weeks R-18 and R12.
>>
>> So potential weeks look like:
>>
>> * R-17
>> * R-14
>> * R-11
>> * R-10
>> * R-9
>>
>> Nova is in the process of figuring out their date. If we have that, it
>> would be good to try to avoid an overlap there. Our linked midcycle
>> session worked out well, but probably better if they don't conflict.
>>
>> We also need to work out locations. Anyone able and willing to host,
>> just let me know. We need a facility with wifi, able to hold ~30-40
>> people, wifi, close to an airport. And wifi.
>>
>> At some point I still think it would be nice for our international folks
>> to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
>> Collins or somewhere similar.
>>
>> Thanks!
>>
>> Sean (smcginnis)
>>
>>


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Steve Baker

On 13/04/16 11:07, Joshua Harlow wrote:

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what 
needs to connect to what, and it figures out the plumbing.


Ideally, it would map somehow to heat&  docker COE& neutron sfc to 
produce a final set of deployment scripts and then just runs it 
through the meat grinder. :)


It would be awesome to use. It may be very difficult to implement.


+1 it will not be easy.

Although the complexity of it is probably no different than what a SQL 
engine has to do to parse a SQL statement into an actionable plan, just 
in this case it's an application deployment 'statement' and the 
realization of that plan is of course where the 'meat' of the program 
is.


It would be nice to connect with whatever neutron SFC stuff is being worked 
on/exists and have a single project for this kind of stuff, but maybe 
I am dreaming too much right now :-P




This sounds a lot like what the TOSCA spec[1] is aiming to achieve. I 
could imagine heat-translator[2] gaining the ability to translate TOSCA 
templates to either nova or COE specific heat templates which are then 
created as stacks.


[1] 
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csprd01/TOSCA-Simple-Profile-YAML-v1.0-csprd01.html

[2] https://github.com/openstack/heat-translator



If you ignore the non container use case, I think it might be fairly 
easily mappable to all three COE's though.


Thanks,
Kevin


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] 
One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)


Fox, Kevin M wrote:
I think part of the problem is containers are mostly orthogonal to 
vms/bare metal. Containers are a package for a single service. 
Multiple can run on a single vm/bare metal host. Orchestration like 
Kubernetes comes in to turn a pool of vm's/bare metal into a system 
that can easily run multiple containers.




Is the orthogonal part a problem because we have made it so or is it
just how it really is?

Brainstorming starts here:

Imagine a descriptor language like (which I stole from
https://review.openstack.org/#/c/210549 and modified):

  ---
  components:
  -   label: frontend
  count: 5
  image: ubuntu_vanilla
  requirements: high memory, low disk
  stateless: true
  -   label: database
  count: 3
  image: ubuntu_vanilla
  requirements: high memory, high disk
  stateless: false
  -   label: memcache
  count: 3
  image: debian-squeeze
  requirements: high memory, no disk
  stateless: true
  -   label: zookeeper
  count: 3
  image: debian-squeeze
  requirements: high memory, medium disk
  stateless: false
  backend: VM
  networks:
  -   label: frontend_net
  flavor: "public network"
  associated_with:
  - frontend
  -   label: database_net
  flavor: high bandwidth
  associated_with:
  - database
  -   label: backend_net
  flavor: high bandwidth and low latency
  associated_with:
  - zookeeper
  - memcache
  constraints:
  -   ref: container_only
  params:
  - frontend
  -   ref: no_colocated
  params:
  -   database
  -   frontend
  -   ref: spread
  params:
  -   database
  -   ref: no_colocated
  params:
  -   database
  -   frontend
  -   ref: spread
  params:
  -   memcache
  -   ref: spread
  params:
  -   zookeeper
  -   ref: isolated_network
  params:
  - frontend_net
  - database_net
  - backend_net
  ...


Now nothing in the above is about containers, or baremetal or vms
(although an 'advanced' constraint can be that a component must be on a
container, and it must say be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (aka a full expanding of that format into an
actual deployment plan), possibly say by optimizing for density (packing
as many things into containers as possible) or optimizing for security
(by using VMs) or optimizing for performance (by using bare-metal).
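
To make "turn that desired description into an actual deployment plan" a
little more concrete, here is a deliberately naive sketch; the plan
structure, the 'optimize' knob and the constraint handling are all made up
for illustration, and a real implementation would need a proper solver:

    # Sketch: expand the descriptor above into a per-component plan,
    # choosing a backend from the constraints and an optimization goal.
    import yaml

    def expand(descriptor_text, optimize='density'):
        desc = yaml.safe_load(descriptor_text)
        container_only = {p for c in desc.get('constraints', [])
                          if c.get('ref') == 'container_only'
                          for p in c.get('params', [])}
        plan = []
        for comp in desc['components']:
            if comp['label'] in container_only:
                backend = 'container'
            elif optimize == 'security':
                backend = 'vm'
            elif optimize == 'performance':
                backend = 'baremetal'
            else:  # optimize == 'density'
                backend = 'container'
            plan.append({'label': comp['label'],
                         'count': comp['count'],
                         'image': comp['image'],
                         'backend': backend})
        return plan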

So, rather than concern itself with supporting launching through a 
COE and through Nova, which are two totally different code paths, 
OpenStack advanced services like Trove could just use a Magnum COE 
and have a UI 

Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread Jay S. Bryant

Sean,

Thanks for getting the discussion started.

I think R-14 might be a little tricky leading up to the 4th of July weekend.

It seems like R-17 is coming awful quick but could be done.  That or 
R-11 would be my second vote then.


We have had successful meet-ups in Rochester ... yes, the guest wifi 
works in Rochester ... I could check into using our facilities if anyone 
was interested.


Jay

On 04/12/2016 09:05 AM, Sean McGinnis wrote:

Hey Cinder team (and those interested),

We've had a few informal conversations on the channel and in meetings,
but wanted to capture some things here and spread awareness.

I think it would be good to start planning for our Newton midcycle.
These have been incredibly productive in the past (at least in my
opinion) so I'd like to get it on the schedule so folks can start
planning for it.

For Mitaka we held our midcycle in the R-10 week. That seemed to work
out pretty well, but I also think it might be useful to hold it a little
earlier in the cycle to keep some momentum going and make sure things
stay pretty focused for the rest of the cycle.

For reference, here is the current release schedule for Newton:

http://releases.openstack.org/newton/schedule.html

R-10 puts us in the last week of July.

I would have a conflict R-16, R-15. We probably want to avoid US
Independence Day R-13, and milestone weeks R-18 and R-12.

So potential weeks look like:

* R-17
* R-14
* R-11
* R-10
* R-9

Nova is in the process of figuring out their date. If we have that, it
would be good to try to avoid an overlap there. Our linked midcycle
session worked out well, but probably better if they don't conflict.

We also need to work out locations. Anyone able and willing to host,
just let me know. We need a facility with wifi, able to hold ~30-40
people, wifi, close to an airport. And wifi.

At some point I still think it would be nice for our international folks
to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
Collins or somewhere similar.

Thanks!

Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Bhandaru, Malini K
Hi Everyone!

Intel would be pleased to host the Nova midcycle meetup either at San 
Antonio, Texas or Hillsboro, Oregon during R-15 (June 20-24) or R-11 (July 
18-22) as preferred by the Nova community.

Regards
Malini 

 Forwarded Message 
Subject:Re: [openstack-dev] [nova] Newton midcycle planning
Date:   Tue, 12 Apr 2016 08:54:17 +1000
From:   Michael Still 
Reply-To:   OpenStack Development Mailing List (not for usage questions)

To: OpenStack Development Mailing List (not for usage questions)




On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann wrote:

A few people have been asking about planning for the nova midcycle
for newton. Looking at the schedule [1] I'm thinking weeks R-15 or
R-11 work the best. R-14 is close to the US July 4th holiday, R-13
is during the week of the US July 4th holiday, and R-12 is the week
of the n-2 milestone.

R-16 is too close to the summit IMO, and R-10 is pushing it out too
far in the release. I'd be open to R-14 though but don't know what
other people's plans are.

As far as a venue is concerned, I haven't heard any offers from
companies to host yet. If no one brings it up by the summit, I'll
see if hosting in Rochester, MN at the IBM site is a possibility.


Intel at Hillsboro had expressed an interest in hosting the N mid-cycle last 
release, so they might still be an option? I don't recall any other possible 
hosts in the queue, but it's possible I've missed someone.

Michael

--
Rackspace Australia



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what needs to 
connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat & docker COE & neutron sfc to produce a 
final set of deployment scripts and then just run it through the meat grinder. :)

It would be awesome to use. It may be very difficult to implement.


+1 it will not be easy.

Although the complexity of it is probably no different than what a SQL 
engine has to do to parse a SQL statement into an actionable plan, just 
in this case it's an application deployment 'statement' and the 
realization of that plan is of course where the 'meat' of the program is.


It would be nice to connect what neutron SFC stuff is being worked 
on/exists and have a single project for this kind of stuff, but maybe I 
am dreaming too much right now :-P




If you ignore the non container use case, I think it might be fairly easily 
mappable to all three COE's though.

Thanks,
Kevin


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Fox, Kevin M wrote:

I think part of the problem is containers are mostly orthogonal to vms/bare 
metal. Containers are a package for a single service. Multiple can run on a 
single vm/bare metal host. Orchestration like Kubernetes comes in to turn a 
pool of vm's/bare metal into a system that can easily run multiple containers.



Is the orthogonal part a problem because we have made it so or is it
just how it really is?

Brainstorming starts here:

Imagine a descriptor language like (which I stole from
https://review.openstack.org/#/c/210549 and modified):

  ---
  components:
  -   label: frontend
  count: 5
  image: ubuntu_vanilla
  requirements: high memory, low disk
  stateless: true
  -   label: database
  count: 3
  image: ubuntu_vanilla
  requirements: high memory, high disk
  stateless: false
  -   label: memcache
  count: 3
  image: debian-squeeze
  requirements: high memory, no disk
  stateless: true
  -   label: zookeeper
  count: 3
  image: debian-squeeze
  requirements: high memory, medium disk
  stateless: false
  backend: VM
  networks:
  -   label: frontend_net
  flavor: "public network"
  associated_with:
  - frontend
  -   label: database_net
  flavor: high bandwidth
  associated_with:
  - database
  -   label: backend_net
  flavor: high bandwidth and low latency
  associated_with:
  - zookeeper
  - memcache
  constraints:
  -   ref: container_only
  params:
  - frontend
  -   ref: no_colocated
  params:
  -   database
  -   frontend
  -   ref: spread
  params:
  -   database
  -   ref: no_colocated
  params:
  -   database
  -   frontend
  -   ref: spread
  params:
  -   memcache
  -   ref: spread
  params:
  -   zookeeper
  -   ref: isolated_network
  params:
  - frontend_net
  - database_net
  - backend_net
  ...


Now nothing in the above is about containers, or baremetal or vms
(although an 'advanced' constraint can be that a component must be on a
container, and it must say be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (aka a full expanding of that format into an
actual deployment plan), possibly say by optimizing for density (packing
as many things into containers as possible) or optimizing for security
(by using VMs) or optimizing for performance (by using bare-metal).


So, rather than concern itself with supporting launching through a COE and through Nova, 
which are two totally different code paths, OpenStack advanced services like Trove could 
just use a Magnum COE and have a UI that asks which existing Magnum COE to launch in, or 
alternately kick off the "Launch new Magnum COE" workflow in horizon, then 
follow up with the Trove launch workflow. Trove then would support being able to use 
containers, users could potentially pack more containers onto their vm's than just Trove, 
and it still would work with both Bare Metal and VM's the same way since Magnum can 
launch on either. I'm afraid supporting both containers and non 

Re: [openstack-dev] heat resource-signal

2016-04-12 Thread Steve Baker

On 12/04/16 18:16, Monika Parkar wrote:

Hi,

I am new to openstack.
I was going through the heat use case "heat resource-signal", and I would 
like to know what kind of signal we can send and how.

I have executed the below command:
# heat resource-signal stack-name resource-name
But I am unable to understand the internal workflow.

Can anybody help me out to understand the workflow of this usecase.

Thanks & Regards,
Monika


This is used to move signal/waitcondition/deployment resources out of 
the IN_PROGRESS state, optionally passing some JSON data for that 
resource to consume.


The format of the data depends on what resource type you're actually 
using. Can you elaborate on what you're trying to do?
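
For illustration, signalling a wait condition handle with some JSON data 
could look roughly like this with python-heatclient; the resources.signal 
call and the payload keys shown are from memory, so double-check them 
against the client and resource-type docs you actually have:

    # Sketch: send a signal plus JSON data to a stack resource.
    from heatclient import client as heat_client

    HEAT_ENDPOINT = 'http://heat.example.com:8004/v1/PROJECT_ID'  # placeholder
    TOKEN = 'a-valid-keystone-token'                              # placeholder

    heat = heat_client.Client('1', endpoint=HEAT_ENDPOINT, token=TOKEN)
    heat.resources.signal(
        stack_id='stack-name',
        resource_name='resource-name',
        data={'status': 'SUCCESS',            # or 'FAILURE'
              'reason': 'configuration done',
              'data': 'any payload the template wants to read',
              'id': 'signal-1'})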


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-12 Thread Dolph Mathews
On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad  wrote:

> Keystone's credential API pre-dates barbican. We started talking about
> having the credential API backed by barbican after barbican was a thing. I'm not
> sure if any work has been done to move the credential API in this
> direction. From a security perspective, I think it would make sense for
> keystone to be backed by barbican.
>

+1

And regarding the "inappropriate use of keystone," I'd agree... without
this spec, keystone is entirely useless as any sort of alternative to
Barbican:

  https://review.openstack.org/#/c/284950/

I suspect Barbican will forever be a much more mature choice for Magnum.


>
> On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu  wrote:
>
>> Hi all,
>>
>>
>>
>> In short, some Magnum team members proposed to store TLS certificates in
>> Keystone credential store. As Magnum PTL, I want to get agreements (or
>> non-disagreement) from OpenStack community in general, Keystone community
>> in particular, before approving the direction.
>>
>>
>>
>> In details, Magnum leverages TLS to secure the API endpoint of
>> kubernetes/docker swarm. The usage of TLS requires a secure store for
>> storing TLS certificates. Currently, we leverage Barbican for this purpose,
>> but we constantly received requests to decouple Magnum from Barbican
>> (because users normally don’t have Barbican installed in their clouds).
>> Some Magnum team members proposed to leverage Keystone credential store as
>> a Barbican alternative [1]. Therefore, I want to confirm what is Keystone
>> team position for this proposal (I remembered someone from Keystone
>> mentioned this is an inappropriate use of Keystone. Would I ask for further
>> clarification?). Thanks in advance.
>>
>>
>>
>> [1]
>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Fox, Kevin M
I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what needs to 
connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat & docker COE & neutron sfc to produce a 
final set of deployment scripts and then just run it through the meat grinder. 
:)

It would be awesome to use. It may be very difficult to implement.

If you ignore the non container use case, I think it might be fairly easily 
mappable to all three COE's though.

Thanks,
Kevin


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Fox, Kevin M wrote:
> I think part of the problem is containers are mostly orthogonal to vms/bare 
> metal. Containers are a package for a single service. Multiple can run on a 
> single vm/bare metal host. Orchestration like Kubernetes comes in to turn a 
> pool of vm's/bare metal into a system that can easily run multiple containers.
>

Is the orthogonal part a problem because we have made it so or is it
just how it really is?

Brainstorming starts here:

Imagine a descriptor language like (which I stole from
https://review.openstack.org/#/c/210549 and modified):

 ---
 components:
 -   label: frontend
 count: 5
 image: ubuntu_vanilla
 requirements: high memory, low disk
 stateless: true
 -   label: database
 count: 3
 image: ubuntu_vanilla
 requirements: high memory, high disk
 stateless: false
 -   label: memcache
 count: 3
 image: debian-squeeze
 requirements: high memory, no disk
 stateless: true
 -   label: zookeeper
 count: 3
 image: debian-squeeze
 requirements: high memory, medium disk
 stateless: false
 backend: VM
 networks:
 -   label: frontend_net
 flavor: "public network"
 associated_with:
 - frontend
 -   label: database_net
 flavor: high bandwidth
 associated_with:
 - database
 -   label: backend_net
 flavor: high bandwidth and low latency
 associated_with:
 - zookeeper
 - memcache
 constraints:
 -   ref: container_only
 params:
 - frontend
 -   ref: no_colocated
 params:
 -   database
 -   frontend
 -   ref: spread
 params:
 -   database
 -   ref: no_colocated
 params:
 -   database
 -   frontend
 -   ref: spread
 params:
 -   memcache
 -   ref: spread
 params:
 -   zookeeper
 -   ref: isolated_network
 params:
 - frontend_net
 - database_net
 - backend_net
 ...


Now nothing in the above is about containers, or baremetal or vms
(although an 'advanced' constraint can be that a component must be on a
container, and it must say be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (aka a full expanding of that format into an
actual deployment plan), possibly say by optimizing for density (packing
as many things into containers as possible) or optimizing for security
(by using VMs) or optimizing for performance (by using bare-metal).

> So, rather than concern itself with supporting launching through a COE and 
> through Nova, which are two totally different code paths, OpenStack advanced 
> services like Trove could just use a Magnum COE and have a UI that asks which 
> existing Magnum COE to launch in, or alternately kick off the "Launch new 
> Magnum COE" workflow in horizon, then follow up with the Trove launch 
> workflow. Trove then would support being able to use containers, users could 
> potentially pack more containers onto their vm's than just Trove, and it 
> still would work with both Bare Metal and VM's the same way since Magnum can 
> launch on either. I'm afraid supporting both containers and non container 
> deployment with Trove will be a large effort with very little code sharing. 
> It may be easiest to have a flag version where non container deployments are 
> upgraded to containers then non container support is dropped.
>

Sure trove seems like it would be a consumer of whatever interprets that
format, just like many other consumers could be (with the special case
that trove creates such a format on behalf of some other consumer, aka
the trove user).

> As for the app-catalog use case, the app-catalog project 
> 

Re: [openstack-dev] [puppet][ceph] Puppet-ceph is now a formal member of puppet-openstack

2016-04-12 Thread Shinobu Kinjo
Great!

On Tue, Apr 12, 2016 at 6:24 AM, Andrew Woodward  wrote:
> It's been a while since we started the puppet-ceph module on stackforge as a
> friend of OpenStack. Since then Ceph's usage in OpenStack has increased
> greatly and we have both the puppet-openstack deployment scenarios as well
> as check-tripleo running against the module.
>
> We've been receiving leadership from the puppet-openstack team for a while
> now and our small core team has struggled to keep up. As such we have added
> the puppet-openstack cores to the review ACL's in gerrit and have been
> formally added to the puppet-openstack project in governance. [1]
>
> I thank the puppet-openstack team for their support, and I am glad to see
> the module move under their leadership.
>
> [1] https://review.openstack.org/300191
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packstack] Update packstack core list

2016-04-12 Thread Ivan Chavero

Hello,

I would like to step up as PTL if everybody is ok with it.

Cheers,
Ivan

- Original Message -
> From: "Martin Magr" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: "Javier Pena" , "David Moreau Simard" 
> , "Alan Pevec" ,
> "Ivan Chavero" 
> Sent: Tuesday, April 12, 2016 1:52:38 AM
> Subject: Re: [openstack-dev] [packstack] Update packstack core list
> 
> Greetings guys,
> 
> 
>   I will have to step down from PTL responsibility. TBH I haven't had time
>   to work on Packstack lately and I probably won't have it in the future
>   because of my other responsibilities. So from my point of view it is not
>   correct to lead the project (though I'd like to contribute/do review from
>   time to time).
> 
> Thanks for understanding,
> Martin
> 
> 
> - Original Message -
> From: "Emilien Macchi" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Cc: "Javier Pena" , "David Moreau Simard"
> 
> Sent: Wednesday, March 16, 2016 3:50:37 PM
> Subject: Re: [openstack-dev] [packstack] Update packstack core list
> 
> On Wed, Mar 16, 2016 at 6:35 AM, Alan Pevec  wrote:
> > 2016-03-16 11:23 GMT+01:00 Lukas Bezdicka :
> >>> ...
> >>> - Martin Mágr
> >>> - Iván Chavero
> >>> - Javier Peña
> >>> - Alan Pevec
> >>>
> >>> I have a doubt about Lukas, he's contributed an awful lot to
> >>> Packstack, just not over the last 90 days. Lukas, will you be
> >>> contributing in the future? If so, I'd include him in the proposal as
> >>> well.
> >>
> >> Thanks, yeah I do plan to contribute just haven't had time lately for
> >> packstack.
> >
> > I'm also adding David Simard who recently contributed integration tests.
> >
> > Since there haven't been any -1 votes for a week, I went ahead and
> > implemented group membership changes in gerrit.
> > Thanks to the past core members, we will welcome you back on the next
> >
> > One more topic to discuss is whether we need a PTL election? I'm not sure we
> > need a formal election yet and the de-facto PTL has been Martin Magr, so if
> > there aren't other proposals let's just name Martin our overlord?
> 
> Packstack is not part of the OpenStack big tent so de facto it does not need
> a PTL to work.
> It's up to the project team to decide whether or not a PTL is needed.
> 
> Oh and of course, go ahead Martin ;-)
> 
> > Cheers,
> > Alan
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> --
> Emilien Macchi
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Armando M.
On 12 April 2016 at 12:36, Anita Kuno  wrote:

> On 04/12/2016 03:23 PM, Armando M. wrote:
> > On 12 April 2016 at 12:16, Matt Riedemann 
> > wrote:
> >
> >>
> >>
> >> On 4/11/2016 11:56 PM, Armando M. wrote:
> >>
> >>> Hi folks,
> >>>
> >>> A provisional schedule for the Neutron project is available [1]. I am
> >>> still working with the session chairs and going through/ironing out
> some
> >>> details as well as gathering input from [2].
> >>>
> >>> I hope I can get something more final by the end of this week. In the
> >>> meantime, please free to ask questions/provide comments.
> >>>
> >>> Many thanks,
> >>> Armando
> >>>
> >>> [1]
> >>>
> >>>
> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
> >>> [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
> >>>
> >>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>
> >> FYI, I have the nova/neutron cross-project session for Wednesday at 11am
> >> in the schedule:
> >>
> >>
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9089
> >
> >
> > Thanks,
> >
> > Surprisingly this does not show up when searching by Neutron tag, even
> > though I can see the session has been tagged with both Nova and
> Neutron.
> > I wonder if I am doing something wrong.
>
> The title for that session includes "Nova: Neutron".
> So it comes up when searching for Neutron (without the colon) or Nova:
> (with the colon) but not Neutron: (with the colon).
>
> Hopefully the web folks will have this straightened out for Barcelona.
>
>
Thanks Anita!


> Thanks,
> Anita.
>
> >
> >
> >>
> >> --
> >>
> >> Thanks,
> >>
> >> Matt Riedemann
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow

Fox, Kevin M wrote:

I think part of the problem is containers are mostly orthogonal to vms/bare 
metal. Containers are a package for a single service. Multiple can run on a 
single vm/bare metal host. Orchestration like Kubernetes comes in to turn a 
pool of vm's/bare metal into a system that can easily run multiple containers.



Is the orthogonal part a problem because we have made it so or is it 
just how it really is?


Brainstorming starts here:

Imagine a descriptor language like (which I stole from 
https://review.openstack.org/#/c/210549 and modified):


---
components:
-   label: frontend
count: 5
image: ubuntu_vanilla
requirements: high memory, low disk
stateless: true
-   label: database
count: 3
image: ubuntu_vanilla
requirements: high memory, high disk
stateless: false
-   label: memcache
count: 3
image: debian-squeeze
requirements: high memory, no disk
stateless: true
-   label: zookeeper
count: 3
image: debian-squeeze
requirements: high memory, medium disk
stateless: false
backend: VM
networks:
-   label: frontend_net
flavor: "public network"
associated_with:
- frontend
-   label: database_net
flavor: high bandwidth
associated_with:
- database
-   label: backend_net
flavor: high bandwidth and low latency
associated_with:
- zookeeper
- memcache
constraints:
-   ref: container_only
params:
- frontend
-   ref: no_colocated
params:
-   database
-   frontend
-   ref: spread
params:
-   database
-   ref: no_colocated
params:
-   database
-   frontend
-   ref: spread
params:
-   memcache
-   ref: spread
params:
-   zookeeper
-   ref: isolated_network
params:
- frontend_net
- database_net
- backend_net
...


Now nothing in the above is about containers, or baremetal or vms 
(although an 'advanced' constraint can be that a component must be on a 
container, and it must say be deployed via docker image XYZ...); instead 
it's just about the constraints that a user has on their deployment and 
the components associated with it. It can be left up to some consuming 
project of that format to decide how to turn that desired description 
into an actual description (aka a full expanding of that format into an 
actual deployment plan), possibly say by optimizing for density (packing 
as many things into containers as possible) or optimizing for security 
(by using VMs) or optimizing for performance (by using bare-metal).



So, rather than concern itself with supporting launching through a COE and through Nova, 
which are two totally different code paths, OpenStack advanced services like Trove could 
just use a Magnum COE and have a UI that asks which existing Magnum COE to launch in, or 
alternately kick off the "Launch new Magnum COE" workflow in horizon, then 
follow up with the Trove launch workflow. Trove then would support being able to use 
containers, users could potentially pack more containers onto their vm's than just Trove, 
and it still would work with both Bare Metal and VM's the same way since Magnum can 
launch on either. I'm afraid supporting both containers and non container deployment with 
Trove will be a large effort with very little code sharing. It may be easiest to have a 
flag version where non container deployments are upgraded to containers then non 
container support is dropped.



Sure trove seems like it would be a consumer of whatever interprets that 
format, just like many other consumers could be (with the special case 
that trove creates such a format on behalf of some other consumer, aka 
the trove user).



As for the app-catalog use case, the app-catalog project 
(http://apps.openstack.org) is working on some of that.

Thanks,
Kevin
  
From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 12:16 PM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Flavio Percoco wrote:

On 11/04/16 18:05 +, Amrith Kumar wrote:

Adrian, thx for your detailed mail.



Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
think it
was Vancouver), there’s likely no silver bullet in this area. After that
conversation, and some further experimentation, I found that even if
Trove had
access to a single Compute API, there were other significant
complications
further down the road, and I didn’t pursue the project further at the
time.


Adrian, Amrith,

I've spent enough time 

[openstack-dev] [Glance] Finalizing the summit schedule

2016-04-12 Thread Nikhil Komawar
Hello everyone,

Firstly, thanks to those who were able to make it to the virtual
pre-summit sync today. The discussions went well and we already have
a decent amount of feedback on most of the proposals.

Based on the discussions, feedback and other input, I've created the
tentative summit schedule at the top of the etherpad [1]. Please take a
look and let me know if something should be strikingly different. I am
planning to post this up later today so that it will appear in the
official OpenStack summit schedule page [2].

[1] https://etherpad.openstack.org/p/newton-glance-summit-planning
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Ability to reproduce failures

2016-04-12 Thread Gabriele Cerami
On Fri, 2016-04-08 at 16:18 +0100, Steven Hardy wrote:

> Note we're not using devtest at all anymore, the developer script
> many
> folks use is tripleo.sh:

So, I followed the flow of the gate jobs starting from the jenkins builder
script, and it seems like it's using devtest (or maybe something I
consider to be devtest but it's not; is devtest the part that creates
some environments, waits for them to be locked by gearman, and so on?)

What I meant with "the script I'm using (created by Sagi) is not
creating the same environment" is that it is not using the same test env
(with gearman and such) that the ci scripts are currently using.

I'm trying to gather all the information I find in this etherpad:

https://etherpad.openstack.org/p/tripleo-ci-onboarding

If someone could review it, it might help me now, and others that wish
to join the effort later.

Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cross-project] Meeting SKIPPED

2016-04-12 Thread Mike Perez
Hi all!

We will be skipping the cross-project meeting since there are no agenda items
to discuss, but someone can add one [1] to call a meeting next time. Thanks!

[1] - 
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Newton design summit sessions are in the schedule

2016-04-12 Thread Matt Riedemann

The Nova design summit sessions are now in the schedule [1].

Some of the session descriptions might need some tweaking yet, like 
adding etherpad links for the specific sessions, but this is the 
schedule at least.


[1] 
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Nova%3A+


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc] Meeting time preferences for OSC team

2016-04-12 Thread Richard Theis
E.1 and O.3 work for me.

Richard

Dean Troyer  wrote on 04/12/2016 03:06:20 PM:

> From: Dean Troyer 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 04/12/2016 03:08 PM
> Subject: Re: [openstack-dev] [osc] Meeting time preferences for OSC team
> 
> On Thu, Apr 7, 2016 at 1:52 AM, Sheel Rana Insaan wrote:
> Common votes seems for below time options
> 
> E.4 - Thursday at 1600 UTC in #openstack-meeting (IRC webclient)
> O.3 - Thursday at 1900 UTC in #openstack-meeting (IRC webclient)
> 
> For reference, here are the proposed times again:
> 
> E.1 Every two weeks (on even weeks) on Thursday at 1300 UTC in 
> #openstack-meeting (IRC webclient)
> E.2 Every two weeks (on even weeks) on Tuesday at 1500 UTC in 
> #openstack-meeting (IRC webclient)
> E.3 Every two weeks (on even weeks) on Friday at 1500 UTC in 
> #openstack-meeting-4 (IRC webclient)
> E.4 Every two weeks (on even weeks) on Thursday at 1600 UTC in 
> #openstack-meeting (IRC webclient)
> O.1 Every two weeks (on odd weeks) on Tuesday at 1400 UTC in 
> #openstack-meeting-4 (IRC webclient)
> O.2 Every two weeks (on odd weeks) on Wednesday at 1400 UTC in 
> #openstack-meeting-3 (IRC webclient)
> O.3 Every two weeks (on odd/even weeks) on Thursday at 1900 UTC in 
> #openstack-meeting (IRC webclient) – our regular meeting time
> In last week's meeting agenda, Tang suggested changing O.3 to O.1 to
> better suit his timezone.  Rather than move both meetings earlier in
> the day, it seems like having a bit more separation gives us better 
> coverage.  In the initial poll, E.1 was an acceptable alternate to 
> most who responded, I'd propose that we look at E.1 and O.3 (the 
> current time) to get better coverage.
> 
> How does this sound?  Tang, you specifically had E.4 mentioned, does
> earlier on even weeks not work for you?
> 
> dt
> 
> -- 
> 
> Dean Troyer
> dtro...@gmail.com
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-12 Thread Lance Bragstad
Keystone's credential API pre-dates barbican. We started talking about
having the credential API backed by barbican after barbican was a thing. I'm not
sure if any work has been done to move the credential API in this
direction. From a security perspective, I think it would make sense for
keystone to be backed by barbican.

On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu  wrote:

> Hi all,
>
>
>
> In short, some Magnum team members proposed to store TLS certificates in
> Keystone credential store. As Magnum PTL, I want to get agreements (or
> non-disagreement) from OpenStack community in general, Keystone community
> in particular, before approving the direction.
>
>
>
> In details, Magnum leverages TLS to secure the API endpoint of
> kubernetes/docker swarm. The usage of TLS requires a secure store for
> storing TLS certificates. Currently, we leverage Barbican for this purpose,
> but we constantly received requests to decouple Magnum from Barbican
> (because users normally don’t have Barbican installed in their clouds).
> Some Magnum team members proposed to leverage Keystone credential store as
> a Barbican alternative [1]. Therefore, I want to confirm what is Keystone
> team position for this proposal (I remembered someone from Keystone
> mentioned this is an inappropriate use of Keystone. Would I ask for further
> clarification?). Thanks in advance.
>
>
>
> [1]
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>
>
>
> Best regards,
>
> Hongbin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc] Meeting time preferences for OSC team

2016-04-12 Thread Dean Troyer
On Thu, Apr 7, 2016 at 1:52 AM, Sheel Rana Insaan 
wrote:

> Common votes seems for below time options
>
> E.4 - Thursday at 1600 UTC in #openstack-meeting (IRC webclient)
> O.3 - Thursday at 1900 UTC in #openstack-meeting (IRC webclient)
>

For reference, here are the proposed times again:


   - E.1 Every two weeks (on even weeks) on Thursday at 1300 UTC in
   #openstack-meeting (IRC webclient)
   - E.2 Every two weeks (on even weeks) on Tuesday at 1500 UTC in
   #openstack-meeting (IRC webclient)
   - E.3 Every two weeks (on even weeks) on Friday at 1500 UTC in
   #openstack-meeting-4 (IRC webclient)
   - E.4 Every two weeks (on even weeks) on Thursday at 1600 UTC in
   #openstack-meeting (IRC webclient)


   - O.1 Every two weeks (on odd weeks) on Tuesday at 1400 UTC in
   #openstack-meeting-4 (IRC webclient)
   - O.2 Every two weeks (on odd weeks) on Wednesday at 1400 UTC in
   #openstack-meeting-3 (IRC webclient)
   - O.3 Every two weeks (on odd/even weeks) on Thursday at 1900 UTC in
   #openstack-meeting (IRC webclient) – our regular meeting time

In last week's meeting agenda, Tang suggested changing O.3 to O.1 to better
suit his timezone.  Rather than move both meetings earlier in the day, it
seems like having a bit more separation gives us better coverage.  In the
initial poll, E.1 was an acceptable alternate to most who responded, I'd
propose that we look at E.1 and O.3 (the current time) to get better
coverage.

How does this sound?  Tang, you specifically had E.4 mentioned, does
earlier on even weeks not work for you?

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-12 Thread Steve Gordon
- Original Message -
> From: "Jeff Peeler" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> On Mon, Apr 11, 2016 at 3:37 AM, Steven Dake (stdake) 
> wrote:
> > Hey folks,
> >
> > The reviewers in Kolla tend to nit-pick the quickstart guide to death
> > during
> > reviews.  I'd like to keep that high bar in place for the QSG, because it
> > is
> > our most important piece of documentation at present.  However, when new
> > contributors see the nitpicking going on in reviews, I think they may get
> > discouraged about writing documentation for other parts of Kolla.
> >
> > I'd prefer if the core reviewers held a lower bar for docs not related to
> > the philosophy or quickstart guide document.  We can always iterate on
> > these new documents (like the operator guide) to improve them and raise the
> > bar on their quality over time, as we have done with the quickstart guide.
> > That way contributors don't feel nitpicked to death and avoid improving the
> > documentation.
> >
> > If you are a core reviewer and agree with this approach please +1, if not
> > please –1.
> 
> I'm fine with relaxing the reviews on documentation. However, there's
> a difference between a missed comma and the whole patch
> being littered with misspellings. In general, in the former scenario I
> try to comment and leave the code review set at 0, hoping the
> contributor fixes it. The danger is that people sometimes miss a 0
> vote, but it doesn't block progress.

My typical experience with (very) occasional drive-by commits to operational 
project docs (albeit not Kolla) is that the type of nit that comes up is more 
typically "-1, thanks for adding X, can you also add Y and Z". Before you know it 
a simple drive-by commit to flesh out one area has become an expectation to 
write an entire chapter.

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] Getting the ball rolling on glance v2 in nova in newton cycle

2016-04-12 Thread Matt Riedemann



On 4/6/2016 6:15 AM, Mikhail Fedosin wrote:

Hello! Thanks for bring this topic up.

First of all, as I mentioned before, great work was done in Mitaka,
so Glance v2 adoption in Nova is not a question of "if", and not even
a question of "when" (in Newton), but a question of "how".

There is a set of commits that does the trick:
1. Xen plugin
https://review.openstack.org/#/c/266933
Sean gave us several good suggestions on how we can improve it. In short:

  * Make this only add new glance method calls upload_vhd_glancev2 /
download_vhd_glancev2 which do the v2 work
  * Don't refactor existing code to do common code here, copy / paste /
update instead. We want the final code to be optimized for v1
delete, not for v1 fixing (it was done in previous patchsets, but
then I made the refactor to reduce the amount of code)

2. 'Show' image info
https://review.openstack.org/#/c/228578
Another 'schema-based' handler is added there. It transforms glance v2
image output to the format adopted in nova.image.

We have to take into account that image properties in v1 are passed with
http headers which makes them case-insensitive. In v2 image info is
passed as a json document and 'MyProperty' and 'myproperty' are two
different properties. Thanks to Brian Rosmaita who noticed it:
http://lists.openstack.org/pipermail/openstack-dev/2016-February/087519.html

Also in v1 a user can create custom properties like 'owner' or
'created_at' and they are stored in a special dictionary 'properties'. v2
images have a flat structure, which means that all custom properties are
located on the same level as base properties. This leads to the fact that
if a v1 image has a custom property whose name coincides with the name of
a base property, then this property will be ignored in v2.

3. Listing of artifacts the v2 way
https://review.openstack.org/#/c/238309
There I added additional handlers that transform v1 image filters into
v2 ones, along with sorting parameters.

'download' and 'delete' patches are included in #238309 since they are
trivial

4. 'creating' and 'updating' images'
https://review.openstack.org/#/c/259097

What was added there:

  * transformation to 2-stepped image creation (creation of the instance in
the db + file uploading; see the sketch just below this list)
  * special handler for creating active images with size '0' without
image data
  * the ability to set a custom location for an image
(the 'show_multiple_locations' option must be enabled in the glance config
for doing that)
  * special handler to remove custom properties from the image:
purge_props flag in v1 vs. props_to_remove list in v2
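
The 2-stepped creation mentioned in the first bullet looks roughly like the
following against python-glanceclient's v2 API (written from memory, so
treat the exact calls as assumptions to verify):

    # Sketch: v2 image creation is create-the-record, then upload-the-data.
    from glanceclient import Client

    glance = Client('2', endpoint='http://glance.example.com:9292',
                    token='a-valid-keystone-token')   # placeholders

    image = glance.images.create(name='my-image',
                                 disk_format='qcow2',
                                 container_format='bare')
    with open('my-image.qcow2', 'rb') as f:
        glance.images.upload(image.id, f)              # second step

    # For the "active image with size '0'" / custom location case, v2 has
    # an add_location call (requires show_multiple_locations=True):
    # glance.images.add_location(image.id, 'http://example.com/img.qcow2', {})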

What else has to be done:

  * Splitting into 2 patches is required: 'create' and 'update' to make it
easier to review.
  * Matt suggested that it's better not to hardcode method logic for v1
and v2 apis. But rather we should create a common base class which
is subclassed for v1/v2 specific callback (abstract) methods, and
then we could have a factory that, given the version, provides the
client impl we're going to deal with.


If we're going to literally delete all of the 'if version == 1' paths in 
the nova code in a couple of releases from now, maybe this doesn't 
matter; it just seemed cleaner to me as a way to abstract the common 
parts and subclass the version-specific handling.
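
Roughly, the shape would be something like this (purely illustrative; the
class and method names below are invented for the example and are not the
actual nova.image code):

    # Sketch: version-specific subclasses behind a tiny factory.
    class ImageAPI(object):
        def extract_query_params(self, filters):
            raise NotImplementedError()

    class ImageAPIV1(ImageAPI):
        def extract_query_params(self, filters):
            # v1-flavoured shape, illustrative only
            return {'properties': dict(filters)}

    class ImageAPIV2(ImageAPI):
        def extract_query_params(self, filters):
            # v2-flavoured flat shape, illustrative only
            return dict(filters)

    def get_image_api(version):
        """Pick the implementation for the discovered glance major version."""
        return ImageAPIV2() if version >= 2 else ImageAPIV1()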




5. Also we have a bug: https://bugs.launchpad.net/nova/+bug/1539698

Thanks to Samuel Matzek who found it. There is a fix
https://review.openstack.org/#/c/274203/ , but it has received conflicting
opinions. If you can suggest a better solution, then I'll be happy :)


If you have any questions about how it was done feel free to send me
emails (mfedo...@mirantis.com) or ping me
on IRC (mfedosin)

And finally I really want to thank you all for supporting this
transition to v2 - it's a big update for OpenStack and without community
help it cannot be done.

Best regards,
Mikhail Fedosin




On Wed, Apr 6, 2016 at 9:35 AM, Nikhil Komawar wrote:

Inline comment.

On 4/1/16 10:16 AM, Sean Dague wrote:
 > On 04/01/2016 10:08 AM, Monty Taylor wrote:
 >> On 04/01/2016 08:45 AM, Sean Dague wrote:
 >>> The glance v2 work is currently blocked as there is no active spec,
 >>> would be great if someone from the glance team could get that
rolling
 >>> again.
 >>>
 >>> I started digging back through the patches in detail to figure
out if
 >>> there are some infrastructure bits we could get in early
regardless.
 >>>
 >>> #1 - new methods for glance xenserver plugin
 >>>
 >>> Let's take a simplified approach on this patch -
 >>> https://review.openstack.org/#/c/266933 and only change the
 >>> xenapi/etc/xapi.d/plugins/ content in the following ways.
 >>>
 >>> - add upload/download_vhd_glance2 methods. Don't add an api
parameter.
 

Re: [openstack-dev] [nova] [glance] Getting the ball rolling on glance v2 in nova in newton cycle

2016-04-12 Thread Sean Dague
On 04/12/2016 03:37 PM, Matt Riedemann wrote:
> 
> 
> On 4/1/2016 8:45 AM, Sean Dague wrote:
>> The glance v2 work is currently blocked as there is no active spec,
>> would be great if someone from the glance team could get that rolling
>> again.
>>
>> I started digging back through the patches in detail to figure out if
>> there are some infrastructure bits we could get in early regardless.
>>
>> #1 - new methods for glance xenserver plugin
>>
>> Let's take a simplified approach on this patch -
>> https://review.openstack.org/#/c/266933 and only change the
>> xenapi/etc/xapi.d/plugins/ content in the following ways.
>>
>> - add upload/download_vhd_glance2 methods. Don't add an api parameter.
>> Add these methods mostly via copy/paste as we're optimizing for deleting
>> v1 not for fixing v1.
> 
> How are we planning on deleting the v1 image API? That would also mean
> deleting Nova's images API which is a proxy for glance v1. I didn't
> think we deleted Nova APIs? We can certainly deprecate it once we have
> glance v2 support.

This is pretty specific here in reference to the xenserver plugin as listed
in that patch. After some point we'll just delete all the things that
talk glance v1 in it, and you'll have to have glance v2 for it to work.
That will drop a bunch of untested code.

That being said, in general I think this still holds. The Glance team
wants to delete their v1 API entirely. We should be thinking about how
the Nova code ends up such that it's easy to delete all the v1
interfacing code in our tree in the cycle when Glance does that to get rid
of the debt. So abstraction models are way less interesting than a very
high level v1 / v2 branch, and all the code being distinct and clean for
each path.

-Sean

>> That will put some infrastructure in place so we can just call the v2
>> actions based on decision from higher up the stack.
>>
>> #2 - move discover major version back to glanceclient -
>> https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L108
>>
>>
>> I don't understand why this was ever in nova. This really should be
>>
>> glanceclient.discover... something. It uses internal methods from
>> glanceclient and internal structures of the content returned.
>>
>> Caching, if desired, should also be on the glanceclient side.
>> glanceclient.reset_version() could exist to clear any caching.
>>
>> #3 - Ideally we'd also have a
>>
>> client = glanceclient.AutoClient(endpoint, ... ) which basically does
>> glanceclient.discover and returns us the right client automatically.
>> client.version provides access to the version information if you need to
>> figure out what version of a client you have.
>>
>>
>> This starts to get to a point where the parts of versioning that
>> glanceclient should know about are in glanceclient, and when nova still
>> needs to know things it can as for client.version.
>>
>> For instance make _extract_query_params -
>> https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L448
>>
>> become an instance method that can
>>
>> if self._client.version >= 2:
>> ...
>> else:
>> ...
>>
>>
>> This isn't the whole story to get us home, however chunking up some of
>> these pieces I think makes getting the rest of the story in much
>> simpler. In nearly every case (except for the alt link in the image
>> view) we can easily have access to a real glance client. And the code
>> will be a ton easier to understand with some of the glanceclient
>> specific details behind the glanceclient interface.
>>
>> -Sean
>>
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Fox, Kevin M
I think part of the problem is containers are mostly orthogonal to vms/bare 
metal. Containers are a package for a single service. Multiple can run on a 
single vm/bare metal host. Orchestration like Kubernetes comes in to turn a 
pool of vm's/bare metal into a system that can easily run multiple containers.

So, rather than concern itself with supporting launching through a COE and 
through Nova, which are two totally different code paths, OpenStack advanced 
services like Trove could just use a Magnum COE and have a UI that asks which 
existing Magnum COE to launch in, or alternately kick off the "Launch new 
Magnum COE" workflow in horizon, then follow up with the Trove launch workflow. 
Trove then would support being able to use containers, users could potentially 
pack more containers onto their vm's than just Trove, and it still would work 
with both Bare Metal and VM's the same way since Magnum can launch on either. 
I'm afraid supporting both containers and non container deployment with Trove 
will be a large effort with very little code sharing. It may be easiest to have 
a flag version where non container deployments are upgraded to containers then 
non container support is dropped.

As for the app-catalog use case, the app-catalog project 
(http://apps.openstack.org) is working on some of that.

Thanks,
Kevin
 
From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 12:16 PM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Flavio Percoco wrote:
> On 11/04/16 18:05 +, Amrith Kumar wrote:
>> Adrian, thx for your detailed mail.
>>
>>
>>
>> Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
>> think it
>> was Vancouver), there’s likely no silver bullet in this area. After that
>> conversation, and some further experimentation, I found that even if
>> Trove had
>> access to a single Compute API, there were other significant
>> complications
>> further down the road, and I didn’t pursue the project further at the
>> time.
>>
>
> Adrian, Amrith,
>
> I've spent enough time researching this area during the last month
> and my
> conclusion is pretty much the above. There's no silver bullet in this
> area and
> I'd argue there shouldn't be one. Containers, bare metal and VMs differ
> in such
> a way (feature-wise) that it'd not be good, as far as deploying
> databases goes,
> for there to be one compute API. Containers allow for a different
> deployment
> architecture than VMs and so does bare metal.

Just some thoughts from me, but why focus on the
compute/container/baremetal API at all?

I'd almost like a way that just describes how my app should be
interconnected, what is required to get it going, and the features
and/or scheduling requirements for different parts of that app.

To me it feels like this isn't a compute API or really a heat API but
something else. Maybe it's closer to the docker compose API/template
format or something like it.

Perhaps such a thing needs a new project. I'm not sure, but it does feel
like, as developers, we should be able to make such a thing that
still exposes the more advanced functionality of the underlying API so
that it can be used if really needed...

Maybe this is similar to an app-catalog, but that doesn't quite feel
like it's the right thing either so maybe somewhere in between...

IMHO it'd be nice to have a unified story around what this thing is, so
that we as a community can drive (as a single group) toward that, maybe
this is where the product working group can help and we as a developer
community can also try to unify behind...

P.S. name for project should be 'silver' related, ha.

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-12 Thread Hongbin Lu
Hi all,

In short, some Magnum team members proposed to store TLS certificates in the 
Keystone credential store. As Magnum PTL, I want to get agreement (or 
non-disagreement) from the OpenStack community in general, and the Keystone 
community in particular, before approving the direction.

In detail, Magnum leverages TLS to secure the API endpoint of 
kubernetes/docker swarm. The usage of TLS requires a secure store for storing 
TLS certificates. Currently, we leverage Barbican for this purpose, but we 
have constantly received requests to decouple Magnum from Barbican (because users 
normally don't have Barbican installed in their clouds). Some Magnum team 
members proposed to leverage the Keystone credential store as a Barbican 
alternative [1]. Therefore, I want to confirm what the Keystone team's position is 
on this proposal (I remember someone from Keystone mentioned this is an 
inappropriate use of Keystone; may I ask for further clarification?). Thanks 
in advance.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
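
For illustration, here is a rough sketch of what this could look like from 
Magnum's side, using python-keystoneclient against the /v3/credentials API 
(this is not an agreed design; the 'magnum_tls' type and the blob layout are 
made-up names):

    import json

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    # Authenticate (in Magnum this would be the bay user or a trust).
    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='magnum', password='secret',
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    keystone = client.Client(session=session.Session(auth=auth))

    # Store the PEM-encoded certificate for a bay as an opaque credential blob.
    blob = json.dumps({'bay_uuid': 'BAY_UUID',
                       'certificate': '-----BEGIN CERTIFICATE-----\n...'})
    cred = keystone.credentials.create(user='USER_ID', type='magnum_tls', blob=blob)

    # Retrieve it later (filtering client-side for simplicity).
    certs = [json.loads(c.blob) for c in keystone.credentials.list()
             if c.type == 'magnum_tls']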

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About snapshot Rollback?

2016-04-12 Thread Erlon Cruz
I didn't see that mentioned. You mean about legacy volumes and snapshots?

On Mon, Apr 11, 2016 at 3:58 PM, Duncan Thomas 
wrote:

> Ok, you're right about device naming by UUID.
>
> So we have two advantages compared to the existing system:
>
> - Keeping the same volume id (and therefore disk UUID) makes reverting a
> VM much easier since device names inside the instance stay the same
> - Can significantly reduce the amount of copying required on some backends
>
> These do seem like solid reasons to consider the feature.
>
> If you can solve the backwards compatibility problem mentioned further up
> this thread, then I think there's a strong case for considering adding this
> API.
>
> The next step is a spec and a PoC implementation.
>
>
>
> On 11 April 2016 at 20:57, Erlon Cruz  wrote:
>
>> You are right, the instance should be shut down or the device be
>> unmounted, before 'revert' or removing the old device. That should be
>> enough to avoid corruption. I think the device naming is not a problem if
>> you use the same volume (at least the disk UUID will be the same).
>>
>> On Mon, Apr 11, 2016 at 2:39 PM, Duncan Thomas 
>> wrote:
>>
>>> You can't just change the contents of a volume under the instance though
>>> - at the very least you need to do an unmount in the instance, and a detach
>>> is preferable, otherwise you've got data corruption issues.
>>>
>>> At that point, the device naming problems are identical.
>>>
>>> On 11 April 2016 at 20:22, Erlon Cruz  wrote:
>>>
 The actual user workflow is:

  1 - User creates a volume(s)
  2 - User attaches volume to instance
  3 - User creates a snapshot
  4 - Something happens causing the need for a revert
  5 - User creates a volume(s) from the snapshot(s)
  6 - User detaches old volumes
  7 - User attaches new volumes (and prays they get the same id) - Nova
 should have the ability to honor supplied device names (vdc, vdd, etc.),
 which does not always happen[1]. But does the volume keep the same UUID in the
 system? Several applications use that to boot.

 The suggested workflow would be simpler from a user's POV:

  1 - User creates a volume(s)
  2 - User attaches volume to instance
  3 - User creates a snapshot
  4 - Something happens causing the need for a revert
  5 - User reverts the snapshot(s)


  [1] https://goo.gl/Kusfne
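
 To make the comparison concrete, a rough sketch of both flows with
 python-cinderclient/python-novaclient (illustrative only; 'sess' is assumed
 to be an authenticated keystoneauth1 session, and the revert call at the end
 is hypothetical since no such API exists in Cinder today):

    from cinderclient import client as cinder_client
    from novaclient import client as nova_client

    cinder = cinder_client.Client('2', session=sess)
    nova = nova_client.Client('2', session=sess)

    volume_id = 'VOLUME_UUID'
    server_id = 'SERVER_UUID'

    # 3 - snapshot (after unmounting/detaching cleanly, as discussed above)
    snap = cinder.volume_snapshots.create(volume_id, name='before-upgrade')

    # Current flow: 5/6/7 - create a *new* volume and swap it in
    old = cinder.volumes.get(volume_id)
    new_vol = cinder.volumes.create(size=old.size, snapshot_id=snap.id)
    nova.volumes.delete_server_volume(server_id, volume_id)   # detach old
    nova.volumes.create_server_volume(server_id, new_vol.id)  # attach new; id/device may differ

    # Proposed flow: 5 - revert in place, keeping the same volume id
    # (hypothetical call, no such API in Cinder today)
    # cinder.volumes.revert_to_snapshot(volume_id, snap.id)
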

 On Fri, Apr 8, 2016 at 5:07 AM, Ivan Kolodyazhny 
 wrote:

> Hi Chenzongliang,
>
> I still don't understand what the difference is between the proposed feature
> and 'restore volume from snapshot'. Could you please explain it?
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Thu, Apr 7, 2016 at 12:00 PM, Chenzongliang <
> chenzongli...@huawei.com> wrote:
>
>> Dear Cruz:
>>
>>
>>
>>  Thanks for your kind support. I will review the previous spec
>> according to the following links. Maybe there are more user scenarios we
>> should consider, such as backup, create volume from snapshot, consistency
>> groups, etc. We will spend some time to gather
>>
>> the users' scenarios and determine what to do in the next step.
>>
>>
>>
>> Sincerely,
>>
>> zongliang chen
>>
>>
>>
>> *From:* Erlon Cruz [mailto:sombra...@gmail.com]
>> *Sent:* April 5, 2016 2:50
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Cc:* Zhangli (ISSP); Shenhong (C)
>> *Subject:* Re: [openstack-dev] [Cinder] About snapshot Rollback?
>>
>>
>>
>> Hi Chen,
>>
>>
>>
>> Not sure if I got you right but I brought this topic in
>> #openstack-cinder some days ago. The idea is to be able to rollback a
>> snapshot in Cinder. Today what is possible is to create a volume from
>> a snapshot. From the user's point of view, this is not ideal, as there are
>> several cases, if not the majority, where the purpose of the snapshot is
>> to revert to a desired state, and not keep the original volume. For some
>> backends, keeping the original volume means space consumption. This space
>> problem becomes bigger when we think about consistency groups. For
>> consistency groups, some backends might have to copy an entire filesystem
>> for each snapshot, consuming space and time. So, I think it would be
>> desired to have the ability to revert snapshots.
>>
>>
>>
>> I know there have been efforts in the past[1] to implement that, but
>> for some reason the work was stopped. If you want to retake the effort
>> please create a spec[2] so everybody can provide feedback.
>>
>>
>>
>> Erlon
>>
>>
>>
>>
>>
>> [1]
>> 

Re: [openstack-dev] [nova] [glance] Getting the ball rolling on glance v2 in nova in newton cycle

2016-04-12 Thread Matt Riedemann



On 4/1/2016 8:45 AM, Sean Dague wrote:

The glance v2 work is currently blocked as there is no active spec; it
would be great if someone from the glance team could get that rolling again.

I started digging back through the patches in detail to figure out if
there are some infrastructure bits we could get in early regardless.

#1 - new methods for glance xenserver plugin

Let's take a simplified approach on this patch -
https://review.openstack.org/#/c/266933 and only change the
xenapi/etc/xapi.d/plugins/ content in the following ways.

- add upload/download_vhd_glance2 methods. Don't add an api parameter.
Add these methods mostly via copy/paste as we're optimizing for deleting
v1 not for fixing v1.


How are we planning on deleting the v1 image API? That would also mean 
deleting Nova's images API which is a proxy for glance v1. I didn't 
think we deleted Nova APIs? We can certainly deprecate it once we have 
glance v2 support.




That will put some infrastructure in place so we can just call the v2
actions based on decision from higher up the stack.

#2 - move discover major version back to glanceclient -
https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L108

I don't understand why this was ever in nova. This really should be

glanceclient.discover... something. It uses internal methods from
glanceclient and internal structures of the content returned.

Caching, if desired, should also be on the glanceclient side.
glanceclient.reset_version() could exist to clear any caching.

#3 - Ideally we'd also have a

client = glanceclient.AutoClient(endpoint, ... ) which basically does
glanceclient.discover and returns us the right client automatically.
client.version provides access to the version information if you need to
figure out what version of a client you have.


This starts to get to a point where the parts of versioning that
glanceclient should know about are in glanceclient, and when nova still
needs to know things it can ask for client.version.

For instance make _extract_query_params -
https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L448
become an instance method that can

if self._client.version >= 2:
...
else:
...
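
To make that concrete, a hypothetical sketch (helper names and filter details
made up, not the actual nova/glanceclient code):

    class ImageService(object):
        def __init__(self, client):
            # 'client' is imagined to come from something like
            # glanceclient.AutoClient(endpoint), exposing a .version attribute.
            self._client = client

        def _extract_query_params(self, filters):
            if self._client.version >= 2:
                # v2 speaks 'visibility' rather than v1's 'is_public'
                params = dict(filters)
                if params.pop('is_public', False):
                    params['visibility'] = 'public'
            else:
                # v1 wraps everything under a 'filters' key
                params = {'filters': dict(filters)}
            return params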


This isn't the whole story to get us home; however, chunking up some of
these pieces, I think, makes getting the rest of the story in much
simpler. In nearly every case (except for the alt link in the image
view) we can easily have access to a real glance client. And the code
will be a ton easier to understand with some of the glanceclient
specific details behind the glanceclient interface.

-Sean



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Anita Kuno
On 04/12/2016 03:23 PM, Armando M. wrote:
> On 12 April 2016 at 12:16, Matt Riedemann 
> wrote:
> 
>>
>>
>> On 4/11/2016 11:56 PM, Armando M. wrote:
>>
>>> Hi folks,
>>>
>>> A provisional schedule for the Neutron project is available [1]. I am
>>> still working with the session chairs and going through/ironing out some
>>> details as well as gathering input from [2].
>>>
>>> I hope I can get something more final by the end of this week. In the
>>> meantime, please feel free to ask questions/provide comments.
>>>
>>> Many thanks,
>>> Armando
>>>
>>> [1]
>>>
>>> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
>>> [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
>>>
>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> FYI, I have the nova/neutron cross-project session for Wednesday at 11am
>> in the schedule:
>>
>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9089
> 
> 
> Thanks,
> 
> Surprisingly this does not show up when searching by the Neutron tag, even
> though I can see the session has been tagged with both Nova and Neutron.
> I wonder if I am doing something wrong.

The title for that session includes "Nova: Neutron".
So it comes up when searching for Neutron (without the colon) or Nova:
(with the colon), but not Neutron: (with the colon).

Hopefully the web folks will have this straightened out for Barcelona.

Thanks,
Anita.

> 
> 
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-12 Thread Jeff Peeler
On Mon, Apr 11, 2016 at 3:37 AM, Steven Dake (stdake)  wrote:
> Hey folks,
>
> The reviewers in Kolla tend to nit-pick the quickstart guide to death during
> reviews.  I'd like to keep that high bar in place for the QSG, because it is
> our most important piece of documentation at present.  However, when new
> contributors see the nitpicking going on in reviews, I think they may get
> discouraged about writing documentation for other parts of Kolla.
>
> I'd prefer if the core reviewers held a lower bar for docs not related to
> the philosophy or quickstart guide document.  We can always iterate on
> these new documents (like the operator guide) to improve them and raise the
> bar on their quality over time, as we have done with the quickstart guide.
> That way contributors don't feel nitpicked to death and avoid improving the
> documentation.
>
> If you are a core reviewer and agree with this approach please +1, if not
> please –1.

I'm fine with relaxing the reviews on documentation. However, there's
a difference between having a missed comma versus the whole patch
being littered with misspellings. In general in the former scenario I
try to comment and leave the code review set at 0, hoping the
contributor fixes it. The danger is that people sometimes miss a 0 vote,
but it doesn't block progress.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-12 Thread Dan Prince
On Mon, 2016-04-11 at 05:54 -0400, John Trowbridge wrote:
> Hola OOOers,
> 
> It came up in the meeting last week that we could benefit from a CI
> subteam with its own meeting, since CI is taking up a lot of the main
> meeting time.
> 
> I like this idea, and think we should do something similar for the
> other
> informal subteams (tripleoclient, UI), and also add a new subteam for
> tripleo-quickstart (and maybe one for releases?).
> 
> We should make separate ACLs for these subteams as well. The
> informal
> approach of adding cores who can +2 anything but are told to only +2
> what they know doesn't scale very well.

+1 for some subteams. I think they make good sense for projects which
are adopted upstream into projects like TripleO. I wouldn't say we have
to go crazy and do it for everything but for projects like quickstart
it seems fine to me.

Dan

> 
> - trown
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Armando M.
On 12 April 2016 at 12:16, Matt Riedemann 
wrote:

>
>
> On 4/11/2016 11:56 PM, Armando M. wrote:
>
>> Hi folks,
>>
>> A provisional schedule for the Neutron project is available [1]. I am
>> still working with the session chairs and going through/ironing out some
>> details as well as gathering input from [2].
>>
>> I hope I can get something more final by the end of this week. In the
>> meantime, please feel free to ask questions/provide comments.
>>
>> Many thanks,
>> Armando
>>
>> [1]
>>
>> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
>> [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
>>
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> FYI, I have the nova/neutron cross-project session for Wednesday at 11am
> in the schedule:
>
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9089


Thanks,

Surprisingly this does not show up when searching by the Neutron tag, even
though I can see the session has been tagged with both Nova and Neutron.
I wonder if I am doing something wrong.


>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [opnfv-tech-discuss][NFV] BoF on NFV Orchestration

2016-04-12 Thread Sridhar Ramaswamy
Greetings,

I'm soliciting inputs and, if you are interested in this topic, your presence at
the upcoming BoF session on NFV Orchestration during the Austin summit,

https://www.openstack.org/summit/austin-2016/summit-schedule/events/8468

We had a packed session for a similar event at the Tokyo summit, with many
great inputs shared on this topic. I've created an etherpad with a few
cross-organizational topics to get the discussion going,


https://etherpad.openstack.org/p/austin-2016-nfv-orchestration-bof

Please share your thoughts, either here on the ML or in the etherpad, to make this
event a success.

thanks,
Sridhar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow

Flavio Percoco wrote:

On 11/04/16 18:05 +, Amrith Kumar wrote:

Adrian, thx for your detailed mail.



Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
think it
was Vancouver), there’s likely no silver bullet in this area. After that
conversation, and some further experimentation, I found that even if
Trove had
access to a single Compute API, there were other significant
complications
further down the road, and I didn’t pursue the project further at the
time.



Adrian, Amrith,

I've spent enough time researching this area during the last month
and my
conclusion is pretty much the above. There's no silver bullet in this
area and
I'd argue there shouldn't be one. Containers, bare metal and VMs differ
in such
a way (feature-wise) that it'd not be good, as far as deploying
databases goes,
for there to be one compute API. Containers allow for a different
deployment
architecture than VMs and so does bare metal.


Just some thoughts from me, but why focus on the 
compute/container/baremetal API at all?


I'd almost like a way that just describes how my app should be 
interconnected, what is required to get it going, and the features 
and/or scheduling requirements for different parts of that app.


To me it feels like this isn't a compute API or really a heat API but 
something else. Maybe it's closer to the docker compose API/template 
format or something like it.


Perhaps such a thing needs a new project. I'm not sure, but it does feel 
like that as developers we should be able to make such a thing that 
still exposes the more advanced functionality of the underlying API so 
that it can be used if really needed...


Maybe this is similar to an app-catalog, but that doesn't quite feel 
like it's the right thing either so maybe somewhere in between...


IMHO it'd be nice to have a unified story around what this thing is, so 
that we as a community can drive (as a single group) toward that, maybe 
this is where the product working group can help and we as a developer 
community can also try to unify behind...


P.S. name for project should be 'silver' related, ha.

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Matt Riedemann



On 4/11/2016 11:56 PM, Armando M. wrote:

Hi folks,

A provisional schedule for the Neutron project is available [1]. I am
still working with the session chairs and going through/ironing out some
details as well as gathering input from [2].

I hope I can get something more final by the end of this week. In the
meantime, please feel free to ask questions/provide comments.

Many thanks,
Armando

[1]
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
[2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



FYI, I have the nova/neutron cross-project session for Wednesday at 11am 
in the schedule:


https://www.openstack.org/summit/austin-2016/summit-schedule/events/9089

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Dedicated CI job for live-migration testing status update

2016-04-12 Thread Timofei Durakov
Hello,

A separate job for testing live-migration with different storage
configurations (no shared storage/NFS/Ceph) was implemented during Mitaka and
is available to be executed using the experimental pipeline.

The job is ready but shows the same stability as
gate-tempest-dsvm-multinode-full.
Bugs like [1][2] affect its stability. So the main idea for now is to
make this job run against the latest libvirt/qemu versions. Markus Zoeller
and Tony Breeds are working on the implementation of a devstack plugin for that
[3]. Once the plugin is ready it will be possible to check. Another option for
this is to use Xenial images in the experimental job, which contain a newer
libvirt/qemu version. The infra team already added the ability to use 16.04 for
experimental jobs [4], so I'm going to try this approach.

Work items:
- Cover negative test cases for live-migration (to check that all rollback
logic works well) - in progress;
- Check state of VM after migration
- Live migration for instance under workload

Timofey.

[1] - https://bugs.launchpad.net/nova/+bug/1535232
[2] - https://bugs.launchpad.net/nova/+bug/1524898
[3] -
https://review.openstack.org/#/q/project:openstack/devstack-plugin-additional-pkg-repos
[4] - https://review.openstack.org/#/c/302949/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Armando M.
On 12 April 2016 at 07:08, Michael Johnson  wrote:

> Armando,
>
> Is there any way we can move the "Neutron: Development track: future
> of *-aas projects" session?  It overlaps with the LBaaS talk:
>
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/6893?goback=1
>
> Michael
>

Swapped with the first slot of the day. I also loaded etherpads here:

https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Neutron

Cheers,
Armando

>
>
> On Mon, Apr 11, 2016 at 9:56 PM, Armando M.  wrote:
> > Hi folks,
> >
> > A provisional schedule for the Neutron project is available [1]. I am
> still
> > working with the session chairs and going through/ironing out some
> details
> > as well as gathering input from [2].
> >
> > I hope I can get something more final by the end of this week. In the
> > meantime, please feel free to ask questions/provide comments.
> >
> > Many thanks,
> > Armando
> >
> > [1]
> >
> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
> > [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][ceph] Puppet-ceph is now a formal member of puppet-openstack

2016-04-12 Thread David Moreau Simard
I'm definitely glad we got puppet-ceph to where it is today and I have
complete trust in the Puppet-OpenStack team to take it to the next
level.

With that announcement, I have to do one of my own -- I am stepping
down as a puppet-ceph core.

My day-to-day work does not involve Ceph anymore and as such my
knowledge about it has stagnated.
Doing thoughtful and informed reviews has been taking more time and
effort than I can afford right now.

I am still going to be around and feel free to poke me whenever necessary!

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Mon, Apr 11, 2016 at 5:24 PM, Andrew Woodward  wrote:
> It's been a while since we started the puppet-ceph module on stackforge as a
> friend of OpenStack. Since then Ceph's usage in OpenStack has increased
> greatly and we have both the puppet-openstack deployment scenarios as well
> as check-tripleo running against the module.
>
> We've been receiving leadership from the puppet-openstack team for a while
> now and our small core team has struggled to keep up. As such we have added
> the puppet-openstack cores to the review ACLs in gerrit and have been
> formally added to the puppet-openstack project in governance. [1]
>
> I thank the puppet-openstack team for their support and, I am glad to see
> the module move under their leadership.
>
> [1] https://review.openstack.org/300191
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Hayes, Graham
On 12/04/2016 18:39, Jeremy Stanley wrote:
> On 2016-04-01 15:50:57 + (+), Hayes, Graham wrote:
>> If a team has already done a TA (e.g. as part of an internal
>> product TA) (and produced all the documentation) would this meet
>> the requirements?
>>
>> I ask, as Designate looks like it meets nearly  all the current
>> requirements - the only outstanding question in my mind was the
>> Threat Analysis
>
> Seems fine to me, though in the interest of openness that
> documentation should probably be licensed such that it can be
> published somewhere for the whole community to read (once any
> glaring deficiencies are addressed anyway).
>

Definitely - I have a request in to open the analysis for public
distribution.

My feeling is that it should land somewhere in our docs, and be
covered by the same license as them. (I *think* that is MIT for
us)

- Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [infra] The same SRIOV / NFV CI failures missed a regression, why?

2016-04-12 Thread Jeremy Stanley
On 2016-04-05 00:45:20 -0400 (-0400), Jay Pipes wrote:
> The proposal is to have the hardware companies donate hardware and
> sysadmins to setup and maintain a *single* third-party CI lab
> environment running the *upstream infra CI toolset* in one
> datacenter at first, moving to multiple datacenters eventually.
> This lab environment would contain hardware that the vendors
> intend to ensure is functionally tested in certain projects --
> mostly Nova and Neutron around specialized PCI devices and SR-IOV
> NICs that have zero chance of being tested functionally in the
> cloudy gate CI environments.

This is great. I always love to see increased testing of OpenStack
(and often more insights come from setting up the test environment
and designing the tests than result from running them on proposed
changes later).

> The thing I am proposing the upstream Infra team members would be
> responsible for is guiding/advising on the creation and
> installation of the CI tools and helping to initially get the CI
> system reporting to the upstream Jenkins/Zuul system. That's it.
> No long-term maintenance, no long-term administration of the
> hardware in this lab environment. Just advice and setup help.

We already have a vibrant and active community around these tools
and concepts and I welcome any additional participation there.

> The vendors would continue to be responsible for keeping the CI
> jobs healthy and the lab environment up and running. It's just
> instead of 12 different external CI systems, there would be 1
> spawning jobs on lots of different types of hardware. I'm hoping
> that reducing the number of external CI systems will enable the
> vendors to jointly improve the quality of the tests because they
> will be able to focus on creating tests instead of keeping 12
> different CI systems up and running.
> 
> Hope that better explains the proposal.

It does. I'm glad to see the implications in the original proposal
of being partially staffed by the foundation turned out not to be
integral. Your call for a "single CI system" also seemed to imply
combining this and the upstream CI rather than simply combining
multiple third-party CI systems into a single third-party CI system,
which I now get is not actually the case. Thanks for clarifying!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Clark, Robert Graham
On 12/04/2016 18:37, "Jeremy Stanley"  wrote:



>On 2016-04-01 15:50:57 + (+), Hayes, Graham wrote:
>> If a team has already done a TA (e.g. as part of an internal
>> product TA) (and produced all the documentation) would this meet
>> the requirements?
>> 
>> I ask, as Designate looks like it meets nearly  all the current
>> requirements - the only outstanding question in my mind was the
>> Threat Analysis
>
>Seems fine to me, though in the interest of openness that
>documentation should probably be licensed such that it can be
>published somewhere for the whole community to read (once any
>glaring deficiencies are addressed anyway).
>-- 
>Jeremy Stanley

In some cases this may be feasible, in others less so. TA in general
tends to be implementation specific which is why, when discussing how
the Security Project would be performing TA work within OpenStack we
decided that it should be reflective of a best-practice deployment for
whatever project was being evaluated.[1][2]

There are two OpenStack vendors I know of that do in depth functional
threat analysis on OpenStack projects. I have been highly involved in
the development of TA at HPE and various colleagues in the Security
project have been involved with the TA process at Rackspace. When
evaluating our documentation sets together at the mid-cycle[2] it was
felt that in both cases, some degree of "normalization" would need to be
performed before either of us would be ready to share these documents
externally.

-Rob [Security]

[1] 
https://openstack-security.github.io/collaboration/2016/01/16/threat-analysis.html
[2] https://openstack-security.github.io/threatanalysis/2016/02/07/anchorTA.html
[3] 
https://openstack-security.github.io/mid-cycle/2016/01/15/mitaka-midcycle.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl] [security][tc] Tidy up language in section 5 of the vulnerability:managed tag

2016-04-12 Thread Jeremy Stanley
On 2016-04-02 14:40:57 + (+), Steven Dake (stdake) wrote:
[...]
> IANAL and writing these things correctly is hard to do properly ;)
> involving the community around the pain points of the tagging
> process is what I'm after.
[...]

Nobody on the VMT is a lawyer either, and when I wrote the original
text I wanted to make sure it provided sufficient guidance on our
expectations while still being inclusive and without needing a
lawyer to interpret. As it stands, the VMT and TC still make the
final call on whether an application is sufficiently convincing, so
the point of the application criteria is to make sure projects know
what sort of convincing we're looking for.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Steven Dake (stdake)


On 4/12/16, 10:37 AM, "Jeremy Stanley"  wrote:

>On 2016-04-01 15:50:57 + (+), Hayes, Graham wrote:
>> If a team has already done a TA (e.g. as part of an internal
>> product TA) (and produced all the documentation) would this meet
>> the requirements?
>> 
>> I ask, as Designate looks like it meets nearly  all the current
>> requirements - the only outstanding question in my mind was the
>> Threat Analysis
>
>Seems fine to me, though in the interest of openness that
>documentation should probably be licensed such that it can be
>published somewhere for the whole community to read (once any
>glaring deficiencies are addressed anyway).
>-- 
>Jeremy Stanley

Perhaps the security team would accept reviews against their documentation to add
the threat analysis documentation?

Regards,
-steve

>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Barrett, Carol L
Hi Folks - I'm looking into the options for Intel to host in Hillsboro, Oregon. 
Stay tuned for more details.
Thanks
Carol

-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: Tuesday, April 12, 2016 9:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Newton midcycle planning



On 4/11/2016 5:54 PM, Michael Still wrote:
> On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann 
> > wrote:
>
> A few people have been asking about planning for the nova midcycle
> for newton. Looking at the schedule [1] I'm thinking weeks R-15 or
> R-11 work the best. R-14 is close to the US July 4th holiday, R-13
> is during the week of the US July 4th holiday, and R-12 is the week
> of the n-2 milestone.
>
> R-16 is too close to the summit IMO, and R-10 is pushing it out too
> far in the release. I'd be open to R-14 though but don't know what
> other people's plans are.
>
> As far as a venue is concerned, I haven't heard any offers from
> companies to host yet. If no one brings it up by the summit, I'll
> see if hosting in Rochester, MN at the IBM site is a possibility.
>
>
> Intel at Hillsboro had expressed an interest in hosting the N 
> mid-cycle last release, so they might still be an option? I don't 
> recall any other possible hosts in the queue, but it's possible I've missed 
> someone.
>
> Michael
>
> --
> Rackspace Australia
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Tracy Jones is also looking into whether or not VMware could host in Palo Alto 
again.

-- 

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Jeremy Stanley
On 2016-04-01 15:50:57 + (+), Hayes, Graham wrote:
> If a team has already done a TA (e.g. as part of an internal
> product TA) (and produced all the documentation) would this meet
> the requirements?
> 
> I ask, as Designate looks like it meets nearly  all the current
> requirements - the only outstanding question in my mind was the
> Threat Analysis

Seems fine to me, though in the interest of openness that
documentation should probably be licensed such that it can be
published somewhere for the whole community to read (once any
glaring deficiencies are addressed anyway).
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newton design summit etherpad wiki created

2016-04-12 Thread Anita Kuno
On 04/12/2016 01:05 PM, Anita Kuno wrote:
> On 04/12/2016 12:45 PM, Matt Riedemann wrote:
>> I've stubbed out the newton design summit etherpad wiki so people can
>> start populating this now.
>>
>> https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads
>>
> Keep in mind we still haven't fully addressed all the consequences of
> the wiki spam event from February. If you have never before had an
> account on the wiki you won't be able to create one. If folks can
> partner up and help out others that are having wiki account issues that
> would be great.
> 
> We hope to continue to make progress on the wiki security issue and to
> discuss it at summit.
> 
> If folks get really stuck with the wiki, please talk to us in
> #openstack-infra.
> 
> Thanks Matt,
> Anita.
> 

It was an oversight on my part to not include the link to the discussion
on the spam event in my prior post:
http://lists.openstack.org/pipermail/openstack-infra/2016-February/003791.html

My apologies,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Jeremy Stanley
On 2016-03-31 15:15:23 -0400 (-0400), michael mccune wrote:
[...]
> * what is the process for performing an analysis
> 
> * how will an analysis be formally recognized and approved
> 
> * who will be doing these analyses

I intentionally didn't specify when writing the
vulnerability:managed tag description but instead only gave an
example, as the details of who can review a project and how will
vary depending on its scope, language, and so on. I was trying to
keep it vague enough to be applicable to all sorts of projects, but
I see now that lack of specificity is leading to additional
confusion (which makes me fear we'll be forced instead to encode
every possible solution in our tag description).

> * does it make sense to keep the analysis process strictly limited
> to the vmt
[...]

Not at all. Providing security feedback to projects seems beneficial
regardless of whether they're going to have vulnerability reporting
overseen by the OpenStack VMT or plan to handle that on their own.
On the other hand, some volunteers may choose to limit their
assistance to projects applying for the vulnerability:managed tag as
a means of keeping from getting spread too thin.

> ultimately, having a third-party review of a project is a worthy
> goal, but this has to be tempered with the reality that a single
> team will not be able to scale out and provide thorough analyses
> for all projects. to that extent, the ossp should work, initially,
> to help a few teams get these analyses completed and in the
> process create a set of useful tools (docs, guides, diagrams,
> foil-hat wearing pamphlets) to help further the effort.
> 
> i would like to propose that the threat analysis portion of the
> vulnerability:managed tag be modified with the goal of having the
> project teams create their own analyses, with an extended
> third-party review to be performed afterwards. in this respect,
> the scale issue can be addressed, as well as the issue of project
> domain knowledge. it makes much more sense to me to have the
> project team creating the initial work here as they will know the
> areas, and architectures, that will need the most attention.
[...]

This seems fine. The issue mostly boils down to the fact that the
VMT used to perform a cursory security review (skim really) of a
project's source code checking for obvious smells to indicate that
it might not be in a mature enough state to cover officially for
vulnerability reporting oversight without creating a ton of
additional work for ourselves the first time someone pointed an
analyzer at it. As a means of scaling the VMT's capacity without
scaling the size of the VMT, in an effort to handle the "big tent"
model, this sanity check seemed like something we could easily
delegate to other interested volunteers to reduce our own workload
and help us cover more and a wider variety of projects.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Telco Working Group meeting for Wednesday April 6th CANCELLED

2016-04-12 Thread Shamail


> On Apr 12, 2016, at 1:12 PM, Steve Gordon  wrote:
> 
> 
> 
> - Original Message -
>> From: "Calum Loudon" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> ,
>> "openstack-operators" 
>> Sent: Wednesday, April 6, 2016 6:09:16 AM
>> Subject: Re: [openstack-dev] Telco Working Group meeting for Wednesday April 
>>6th CANCELLED
>> 
>> Thanks Steve
>> 
>> I agree with moving to the PWG.
>> 
>> On that topic, do you know what's happened to some of the user stories we
>> proposed, specifically https://review.openstack.org/#/c/290060/ and
>> https://review.openstack.org/#/c/290347/? Neither shows up in
>> https://review.openstack.org/#/q/status:open+project:openstack/openstack-user-stories
> 
> This query includes status:open, and those two reviews were merged already so 
> they don't show up.
> 
>> but there is a https://review.openstack.org/#/c/290991/ which seems to be a
>> copy of https://review.openstack.org/#/c/290060/ with the template help text
>> added back in and no mention of the original?
> 
> From Shamail's comment in 290991:
> 
>This change should be used to discuss and refine the concept. Can the user 
> story owner please make a minor change to show ownership?
> 
> Basically they opened new reviews with a minor change to trigger further 
> discussion. I'm not in love with this approach versus just discussing it on 
> the original move request but it is the way it is being done for now. W.r.t. 
> 290060 I believe you probably meant to include another link but I imagine the 
> situation is the same.
Yeah, unfortunately, this approach was needed when we changed the workflow.  A 
minor change would be recommended for now.  

The template recently changed and you could update the story to the new 
template (if it isn't already updated) and that would suffice.
> 
> -Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-12 Thread Hongbin Lu
Hi all,

We discussed this in our last team meeting, and we were in disagreement. Some 
of us preferred option #1, others preferred option #2. I would suggest leaving this 
topic to the design summit so that our team members have more time to research 
each option. If we are in disagreement again, I will let the core team vote 
(hopefully we will have the whole core team at the design summit).

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: April-11-16 4:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

+1 for "#1: Mesos and Marathon". Most deployments that I am aware of has this 
setup. Also we can provide several line instructions how to run Chronos on top 
of Marathon.

honestly I don't see how #2 will work, because Marathon installation is 
different from Aurora installation.

---
Egor


From: Kai Qiang Wu >
To: OpenStack Development Mailing List (not for usage questions) 
>
Sent: Sunday, April 10, 2016 6:59 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

#2 seems more flexible, and if it can be proved that it can "make the SAME mesos bay 
work with multiple frameworks", it would be great. That means one mesos bay 
should support multiple frameworks.




Thanks


Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu >
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: 11/04/2016 12:06 am
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay




My preference is #1, but I don't feel strongly about excluding #2. I would agree to go 
with #2 for now and switch back to #1 if there is demand from users. As for 
Ton's suggestion to push Marathon into the introduced configuration hook, I 
think it is a good idea.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: April-10-16 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
I would agree that #2 is the most flexible option, providing a well defined 
path for additional frameworks such as Kubernetes and Swarm.
I would suggest that the current Marathon framework be refactored to use this 
new hook, to serve as an example and to be the supported
framework in Magnum. This will also be useful to users who want other 
frameworks but not Marathon.
Ton,


From: Adrian Otto >
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: 04/08/2016 08:49 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay


On Apr 8, 2016, at 3:15 PM, Hongbin Lu 
> wrote:

Hi team,
I would like to give an update for this thread. In the last team meeting, we discussed 
several options to introduce Chronos to our mesos bay:
1. Add Chronos to the mesos bay. With this option, the mesos bay will have two 
mesos frameworks by default (Marathon and Chronos).
2. Add a configuration hook for users to configure additional mesos frameworks, 
such as Chronos. With this option, Magnum team doesn’t need to maintain extra 
framework configuration. However, users need to do it themselves.
This is my preference.

Adrian
3. Create a dedicated bay type for Chronos. With this option, we separate 
Marathon and Chronos into two different bay types. As a result, each bay type 
becomes easier to maintain, but those two mesos frameworks cannot share 
resources (a key feature of mesos is to have different frameworks running on 
the same cluster to increase resource utilization). Which option do you prefer? Or 
do you have other suggestions? Advice is welcome.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-28-16 12:19 AM
To: 

Re: [openstack-dev] Telco Working Group meeting for Wednesday April 6th CANCELLED

2016-04-12 Thread Steve Gordon


- Original Message -
> From: "Calum Loudon" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> ,
> "openstack-operators" 
> Sent: Wednesday, April 6, 2016 6:09:16 AM
> Subject: Re: [openstack-dev] Telco Working Group meeting for Wednesday April  
> 6th CANCELLED
> 
> Thanks Steve
> 
> I agree with moving to the PWG.
> 
> On that topic, do you know what's happened to some of the user stories we
> proposed, specifically https://review.openstack.org/#/c/290060/ and
> https://review.openstack.org/#/c/290347/?  Neither shows up in
> https://review.openstack.org/#/q/status:open+project:openstack/openstack-user-stories

This query includes status:open, and those two reviews were merged already so 
they don't show up.

> but there is a https://review.openstack.org/#/c/290991/ which seems to be a
> copy of https://review.openstack.org/#/c/290060/ with the template help text
> added back in and no mention of the original?

From Shamail's comment in 290991:

This change should be used to discuss and refine the concept. Can the user 
story owner please make a minor change to show ownership?

Basically they opened new reviews with a minor change to trigger further 
discussion. I'm not in love with this approach versus just discussing it on the 
original move request but it is the way it is being done for now. W.r.t. 290060 
I believe you probably meant to include another link but I imagine the 
situation is the same.

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly subteam status report

2016-04-12 Thread Ruby Loo
Hi,

We are nerdy to present this week's subteam report for Ironic. As usual,
this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 04.04.2016):
- Ironic: 200 bugs (+4) + 163 wishlist items. 25 new, 136 in progress (+1),
0 critical, 26 high and 18 incomplete (+2)
- Inspector: 13 bugs (-1) + 15 wishlist items (-1). 1 new, 6 in progress
(-1), 0 critical, 4 high and 0 incomplete (-1)
- Nova bugs with Ironic tag: 16 (+1). 1 new (+1), 0 critical, 0 high

Network isolation (Neutron/Ironic work) (jroll)
===
- Tests are green, needs core love
- The main meat:
- https://review.openstack.org/#/c/285852/
- (deva) some concerns on p31, brought up in ironic-neutron meeting
this morning. needs another rev to fix.
- https://review.openstack.org/#/c/206244/
- https://review.openstack.org/#/c/206144
- https://review.openstack.org/#/c/213262/

Live upgrades (lucasagomes, lintan)
===
- Propose a spec to discuss:
- https://review.openstack.org/#/c/299245/

Node filter API and claims endpoint (jroll, devananda, lucasagomes)
===
- expect an update on this spec before summit (hopefully this week)

Oslo (lintan)
=
- In order to make use of oslo-config-generator, we should make ironic-lib
expose the options too.
- Patch is landed :https://review.openstack.org/#/c/297549/ so we need
to release a new version of ironic-lib
- release request: https://review.openstack.org/#/c/304229/

Testing/Quality (jlvillal/krtaylor)
===
- Grenade: No update this week.

Inspector (dtansur)
===
- Reprocessing stored node introspection data API merged, inspector client
patch in review

webclient (krotscheck / betherly)
=
- v1.1 has been released.

Drivers:

CIMC and UCSM (sambetts)

- Both CI down for 1hr because of HW upgrade, then should be back up and
testing.
- UCSM driver needs to prove itself then will be made voting, but seems to
be getting good results after a race condition was fixed on Monday morning.

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newton design summit etherpad wiki created

2016-04-12 Thread Anita Kuno
On 04/12/2016 12:45 PM, Matt Riedemann wrote:
> I've stubbed out the newton design summit etherpad wiki so people can
> start populating this now.
> 
> https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads
> 
Keep in mind we still haven't fully addressed all the consequences of
the wiki spam event from February. If you have never before had an
account on the wiki you won't be able to create one. If folks can
partner up and help out others that are having wiki account issues that
would be great.

We hope to continue to make progress on the wiki security issue and to
discuss it at summit.

If folks get really stuck with the wiki, please talk to us in
#openstack-infra.

Thanks Matt,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Newton design summit etherpad wiki created

2016-04-12 Thread Matt Riedemann
I've stubbed out the newton design summit etherpad wiki so people can 
start populating this now.


https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

2016-04-12 Thread Sławek Kapłoński
Hello,

It's the nova-compute service which is configuring it. This service is running on 
the compute node: http://docs.openstack.org/developer/nova/architecture.html
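
Roughly, the plugging that nova-compute does on the compute node boils down to
something like this (simplified sketch based on create_ovs_vif_port in
nova/network/linux_net.py, not the exact nova code):

    import subprocess

    def plug_ovs_vif(bridge, dev, iface_id, mac, instance_uuid):
        # Needs root; nova runs this through rootwrap.
        subprocess.check_call([
            'ovs-vsctl', '--', '--if-exists', 'del-port', dev,
            '--', 'add-port', bridge, dev,
            '--', 'set', 'Interface', dev,
            'external-ids:iface-id=%s' % iface_id,
            'external-ids:iface-status=active',
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_uuid,
        ])

    # e.g. plug_ovs_vif('br-int', 'tap01234567-89', PORT_ID, MAC, INSTANCE_UUID)
    # The ovs-agent (or ODL) then notices the port via its external-ids and
    # sets up the flows.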

-- 
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

On Tuesday, April 12, 2016 at 16:16:25 CEST, 张晨 wrote:
> Thx for the answer, I finally located the implementation in
> nova.network.linux_net.create_ovs_vif_port
> 
> 
> but how could nova execute ovs-vsctl for the compute-node hypervisor
> when it runs just on the control-node?
> At 2016-04-12 13:13:48, "Sławek Kapłoński"  wrote:
> >Hello,
> >
> >I don't know this ODL and how it works, but for the ovs-agent, nova-compute is
> >the part which adds the port to the ovs bridge (see for example
> >nova/virt/libvirt/vif.py)>
> >> Hello everyone,
> >> 
> >> 
> >> I have a question about Neutron. I learned that the ovs-agent receives the
> >> update-port rpc notification, and updates the ovsdb data for the VM port.
> >> 
> >> 
> >> But what is the situation when I use SDN controllers instead of the OVS
> >> mechanism driver? I found nowhere in ODL to add the VM port to ovs.
> >> 
> >> 
> >> I asked the author of the related ODL plugin, but he told me that
> >> OpenStack
> >> adds the VM port to ovs.
> >> 
> >> 
> >> Then, where is the implementation in OpenStack to add the VM port to
> >> ovs,
> >> when I'm using ODL replacing the OVS mechanism driver?
> >> 
> >> 
> >> Thanks

signature.asc
Description: This is a digitally signed message part.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-12 Thread Ryan Hallisey
I'm definitely guilty of nit picking docs.  I'm fine with holding back and 
following up with a patch to fix any grammar/misspelling mistakes. +1

-Ryan  

- Original Message -
From: "Paul Bourke" 
To: openstack-dev@lists.openstack.org
Sent: Tuesday, April 12, 2016 4:49:09 AM
Subject: Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

I've said in the past I'm not a fan of nitpicking docs. That said, I 
feel it's important for spelling and grammar to be correct. The 
quickstart guide is the first point of contact for many people to the 
project, and rightly or wrongly it will give an overall impression of 
the quality of the project.

When I review doc changes I try not to nitpick on the process being 
described - e.g. if an otherwise fine patch is already 5 iterations in 
and the example given to configure a service could be done in 3 lines 
less bash, I'll usually comment but still +2. If on the other hand it is 
rife with typos (which, by the way, is easily solved with a spellchecker) 
and reads really badly, I will flag it.

-Paul

On 11/04/16 19:27, Steven Dake (stdake) wrote:
> My proposal was for docs-only patches not code contributions with docs.
> Obviously we want a high bar for code contributions.  This is part of the
> reason we have the DocImpact flag (for folks that don't feel comfortable
> writing documentation because perhaps of ESL, or other reasons).
>
> We already have a way to decouple code from docs with DocImpact.
>
> Regards
> -steve
>
> On 4/11/16, 6:17 AM, "Michał Jastrzębski"  wrote:
>
>> So one way to approach it is to decouple docs from code and make it two
>> reviews. We can -1 code without docs and ask for a separate docs patchset
>> depending on the one in question. Then we can nitpick all we want :) The
>> new contributor will get his/her code merged, at least one patchset, so it
>> will work better for morale, and we'll be able to keep a high bar for the
>> QSG and other docs. There is a possibility that the author will abandon
>> the docs patch after the code merges, but well, we can take over the docs
>> review.
>>
>> What do you think, guys? I'd really like to keep a high quality standard
>> all the way and not scare off new committers at the same time.
>>
>> Cheers,
>> Michal
>>
>> On 11 April 2016 at 03:50, Steven Dake (stdake)  wrote:
>>>
>>>
>>> On 4/11/16, 1:38 AM, "Gerard Braad"  wrote:
>>>
 Hi,

 On Mon, Apr 11, 2016 at 4:20 PM, Steven Dake (stdake) 
 wrote:
> On 4/11/16, 12:54 AM, "Gerard Braad"  wrote:
> as
>> at the moment getting an environment up-and-running according to the
>> quickstart guide is a hit and miss
> I don't think deployment is hit or miss as long as the QSG is
> followed to a T :)

 Maybe saying "at the moment" was incorrect, as my deployment
 according to the QSG was a few weeks ago. Sorry about this... as
 you guys have put a lot of effort into it recently.


> I agree we need more clarity in what belongs in the QSG.
 This can be a separate discussion (Not intending to hijack this thread).


 I am not a core reviewer, but I would keep it as-is. I do not see a need for
>>>
>>> Even though you're not a core reviewer, your comments are valued.  The
>>> reason I addressed core reviewers specifically is that they have +2
>>> permissions
>>> and I would like more leniency on new documentation in other files
>>> outside
>>> those listed above (philosophy document, QSG), with a public statement of
>>> such.
>>>
 a lower bar. Although, documentation is the entry point into a
 community (as user and potential contributor) and therefore it should
 be of a high quality. Maybe I could provide more suggestions
 instead of just an indication of 'change this for that'.
>>>
>>> The issue I see with our QSG is it has the highest bar for review
>>> passage
>>> of any file in the repository.  Any QSG change typically requires 10 or
>>> more patch sets to make it through the core reviewer gauntlet.  This
>>> discourages people from writing new documentation.  I don't want this to
>>> carry over into other parts of the documentation that are as of yet
>>> unwritten.  I'd like new documentation to be ok with misspellings,
>>> grammar
>>> errors, formatting problems, ESL authors, and that sort of thing.
>>>
>>> The QSG should tolerate none of these types of errors at this point - it
>>> must be absolutely perfect (at least in English:) as to not cause
>>> confusion to new operators.
>>>
>>> Regards
>>> -steve
>>>

 regards,


 Gerard

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>

Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Matt Riedemann



On 4/11/2016 5:54 PM, Michael Still wrote:

On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann
> wrote:

A few people have been asking about planning for the nova midcycle
for newton. Looking at the schedule [1] I'm thinking weeks R-15 or
R-11 work the best. R-14 is close to the US July 4th holiday, R-13
is during the week of the US July 4th holiday, and R-12 is the week
of the n-2 milestone.

R-16 is too close to the summit IMO, and R-10 is pushing it out too
far in the release. I'd be open to R-14 though but don't know what
other people's plans are.

As far as a venue is concerned, I haven't heard any offers from
companies to host yet. If no one brings it up by the summit, I'll
see if hosting in Rochester, MN at the IBM site is a possibility.


Intel at Hillsboro had expressed an interest in hosting the N mid-cycle
last release, so they might still be an option? I don't recall any other
possible hosts in the queue, but it's possible I've missed someone.

Michael

--
Rackspace Australia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Tracy Jones is also looking into whether or not VMware could host in 
Palo Alto again.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

2016-04-12 Thread Dina Belova
Thank you Morgan for quick fixes proposed!

On Tue, Apr 12, 2016 at 6:19 PM, Morgan Fainberg 
wrote:

> Sorry, missed the copy/paste of links:
>
> https://bugs.launchpad.net/keystone/+bug/1567403 [0]
> https://bugs.launchpad.net/keystone/+bug/1567413 [1]
>
> [0]
> https://review.openstack.org/#/q/I4857cfe1e62d54c3c89a0206ffc895c4cf681ce5,n,z
> [1] https://review.openstack.org/#/c/304688/
>
> --Morgan
>
> On Tue, Apr 12, 2016 at 8:16 AM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>> Fixes have been proposed for both of these bugs.
>>
>> Cheers,
>> --Morgan
>>
>> On Tue, Apr 12, 2016 at 12:38 AM, Dina Belova 
>> wrote:
>>
>>> Matt,
>>>
>>> Thanks for sharing the information about your benchmark. Indeed we need
>>> to follow up on this topic (I'll attend the summit). Let's try to collect
>>> as much information as possible prior to Austin so we have more facts to
>>> work with. I'll try to figure out why the local context cache did not work,
>>> at least in my environment; judging by the results, most probably it did
>>> not act as expected during your benchmarking either.
>>>
>>> Cheers,
>>> Dina
>>>
>>> On Mon, Apr 11, 2016 at 10:57 PM, Matt Fischer 
>>> wrote:
>>>
 On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova 
 wrote:

> Hey, openstackers!
>
> Recently I was trying to profile Keystone (OpenStack Liberty vs
> Mitaka) using this set of changes
> 
>  (that's
> currently on review - some final steps are required there to finish the
> work) and OSprofiler.
>
> Some preliminary results (all in one OpenStack node) can be found here
> 
>  (raw
> OSprofiler reports are not yet merged to some place and can be found
> here ). The full plan
> 
>  of
> what's going to be tested can be found in the docs as well. In short I
> wanted to take a look at how Keystone changed its DB/cache usage from
> Liberty to Mitaka, keeping in mind that there were several changes
> introduced:
>
>- federation support was added (and made DB scheme a bit more
>complex)
>- Keystone moved to oslo.cache usage
>- local context cache was introduced during Mitaka
>
> First of all - *good job on making Keystone less DB-intensive with caching
> turned on*! If Keystone caching is turned on, the number of DB queries made
> to the Keystone DB in Mitaka is on average half of what it is in Liberty,
> comparing the same requests and topologies. Thanks to the Keystone
> community for making it happen :)
>
> Although, I faced *two strange issues* during my experiments, and I'm
> kindly asking you, folks, to help me here:
>
>    - I've created bug #1567403 to share information - when I turned caching
>      on, the local context cache should cache identical function calls within
>      an API request so that Memcache is not pinged too often. Although I saw
>      such calls, Keystone still used Memcache to gather this information.
>      Could someone take a look at this and help me figure out what I am
>      observing? At first sight the local context cache should work ok, but
>      for some reason I do not see it being used.
>    - One more filed bug - #1567413 - is about a somewhat opposite
>      situation :) When I turned the cache off explicitly in the keystone.conf
>      file, I still observed some of the values being fetched from Memcache...
>      Your help is very much appreciated!
>
> Thanks in advance and sorry for a long email :)
>
> Cheers,
> Dina
>
>
 Dina,

 Thanks for starting this conversation. I had some weird perf results
 comparing L to an RC release of Mitaka, but I was holding them until
 someone else confirmed what I saw. I'm testing token creation and
 validation. From what I saw, token validation slowed down in Mitaka. After
 doing my benchmark runs, the traffic to memcache was 8x in Mitaka from what
 it was in Liberty. That implies more caching but 8x is a lot and even
 memcache references are not free.

 I know some of the Keystone folks are looking into this so it will be
 good to follow-up on it. Maybe we could talk about this at the summit?
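
For anyone following along who has not looked at the feature being discussed:
the local context cache is essentially a per-request memoization layer sitting
in front of the shared memcache. A toy illustration of the intended behaviour
(not Keystone's actual code; all names here are made up):

    class RequestLocalCache(object):
        # Per-request dict sitting in front of a shared cache (e.g. memcache).
        def __init__(self, shared_cache):
            self.shared = shared_cache   # shared cache client, e.g. memcache
            self.local = {}              # lives only for one API request

        def get(self, key, loader):
            # 1. Identical lookups within the same request should hit this dict.
            if key in self.local:
                return self.local[key]
            # 2. Otherwise fall back to the shared cache ...
            value = self.shared.get(key)
            if value is None:
                # 3. ... and finally to the authoritative source (the DB).
                value = loader()
                self.shared.set(key, value)
            self.local[key] = value
            return value

The behaviour described in bug #1567403 is, in effect, step 1 never being hit:
repeated identical lookups inside one request still travel all the way to
memcache.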




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Fox, Kevin M
Flavio, and whoever else is available to attend,

We have a summit session for instance users listed here that could be part of 
the solution:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9485

Please attend if you can.
--

+1 for a basic common abstraction. The app catalog could really use it too. 
We'd like to be able to host container orchestration templates and hand them 
over to Magnum for launching.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Tuesday, April 12, 2016 5:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

On 11/04/16 16:53 +, Adrian Otto wrote:
>Amrith,
>
>I respect your point of view, and agree that the idea of a common compute API
>is attractive… until you think a bit deeper about what that would mean. We
>seriously considered a “global” compute API at the time we were first
>contemplating Magnum. However, what we came to learn through the journey of
>understanding the details of how such a thing would be implemented was that such
>an API would either be (1) the lowest common denominator (LCD) of all compute
>types, or (2) an exceedingly complex interface.
>
>You expressed a sentiment below that trying to offer choices for VM, Bare Metal
>(BM), and Containers for Trove instances “adds considerable complexity”.
>Roughly the same complexity would accompany the use of a comprehensive compute
>API. I suppose you were imagining an LCD approach. If that’s what you want,
>just use the existing Nova API, and load different compute drivers on different
>host aggregates. A single Nova client can produce VM, BM (Ironic), and
>Container (libvirt-lxc) instances all with a common API (Nova) if it’s
>configured in this way. That’s what we do. Flavors determine which compute type
>you get.
>
>If what you meant is that you could tap into the power of all the unique
>characteristics of each of the various compute types (through some modular
>extensibility framework) you’ll likely end up with complexity in Trove that is
>comparable to integrating with the native upstream APIs, along with the
>disadvantage of waiting for OpenStack to continually catch up to the pace of
>change of the various upstream systems on which it depends. This is a recipe
>for disappointment.
>
>We concluded that wrapping native APIs is a mistake, particularly when they are
>sufficiently different than what the Nova API already offers. Containers APIs
>have limited similarities, so when you try to make a universal interface to all
>of them, you end up with a really complicated mess. It would be even worse if
>we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s
>approach is to offer the upstream native APIs for the different container
>orchestration engines (COE), and compose Bays for them to run on that are built
>from the compute types that OpenStack supports. We do this by using different
>Heat orchestration templates (and conditional templates) to arrange a COE on
>the compute type of your choice. With that said, there are still gaps where not
>all storage or network drivers work with Ironic, and there are non-trivial
>security hurdles to clear to safely use Bays composed of libvirt-lxc instances
>in a multi-tenant environment.
>
>My suggestion to get what you want for Trove is to see if the cloud has Magnum,
>and if it does, create a bay with the flavor type specified for whatever
>compute type you want, and then use the native API for the COE you selected for
>that bay. Start your instance on the COE, just like you use Nova today. This
>way, you have low complexity in Trove, and you can scale both the number of
>instances of your data nodes (containers), and the infrastructure on which they
>run (Nova instances).


I've been researching this area and I've reached pretty much the same
conclusion. I've had moments of wondering whether creating bays is something
Trove should do, but I now think it should.

The need to handle the native API is the part I find a bit painful, as that
means more code needs to happen in Trove for us to provide these provisioning
facilities. I wonder if a common *library* would help here, at least to handle
those "simple" cases. Anyway, I look forward to chatting with you all about
this.
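
Purely as a strawman of what such a common library's surface could look like
(every name below is hypothetical; nothing like this exists today):

    import abc

    import six


    @six.add_metaclass(abc.ABCMeta)
    class CoeProvisioner(object):
        # Provision a container orchestration engine and expose its native API.

        @abc.abstractmethod
        def create_cluster(self, name, node_count, flavor):
            # Create the COE cluster (e.g. a Magnum bay) and wait until ready.
            pass

        @abc.abstractmethod
        def native_client(self, cluster):
            # Return a client for the COE's own API (Kubernetes/Swarm/Mesos).
            pass


    def launch_datastore(provisioner, name, nodes):
        # How a consumer such as Trove might use it: provision once, then talk
        # to the COE natively to schedule its database containers.
        cluster = provisioner.create_cluster(name, node_count=nodes,
                                             flavor='operator-chosen-flavor')
        return provisioner.native_client(cluster)

The point of something like this would be to keep the "create the bay / talk
to the native API" plumbing out of each consuming service, while leaving the
per-COE differences to per-COE drivers.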

It'd be great if you (and other magnum folks) could join this session:

https://etherpad.openstack.org/p/trove-newton-summit-container

Thanks for chiming in, Adrian.
Flavio

>Regards,
>
>Adrian
>
>
>
>
>On Apr 11, 2016, at 8:47 AM, Amrith Kumar  wrote:
>
>Monty, Dims,
>
>I read the notes and was similarly intrigued about the idea. In particular,
>from the perspective of projects like Trove, having a common Compute API is
>very valuable. It would allow the 

Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-04-12 Thread Ruby Loo
Yes, I think it would be good to have a summit session on that. However,
before the session, it would really be helpful if the folks with proposals
got together and/or reviewed each other's proposals, and summarized their
findings. You may find after reviewing the proposals that, e.g., only two are
really different. Or several have merit because they are addressing
slightly different issues. That would make it easier to
present/discuss/decide at the session.

--ruby


On 12 April 2016 at 09:17, Jim Rollenhagen  wrote:

> On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu wrote:
> > Maybe we can continue the discussion here, as there's not enough time in
> the
> > IRC meeting :)
>
> Someone mentioned this would make a good summit session, as there's a
> few competing proposals that are all good options. I do welcome
> discussion here until then, but I'm going to put it on the schedule. :)
>
> // jim
>
> >
> > On Fri, Apr 8, 2016 at 1:06 AM, Zhenguo Niu 
> wrote:
> >
> > >
> > > Ironic is currently using shellinabox to provide a serial console, but
> > > it's not compatible
> > > with nova, so I would like to propose a new console type and a custom
> HTTP
> > > proxy [1]
> > > which validates the token and connects to the ironic console from nova.
> > >
> > > On the Horizon side, we should add support for the new console type [2] as
> > > well; here are some screenshots from my local environment.
> > >
> > >
> > >
> > > ​
> > >
> > > Additionally, shellinabox console port management should be improved
> in
> > > ironic; instead of being manually specified, we should introduce a dynamic
> > > allocation/deallocation [3] mechanism.
> > >
> > > Functionality is being implemented in Nova, Horizon and Ironic:
> > > https://review.openstack.org/#/q/topic:bp/shellinabox-http-proxy
> > > https://review.openstack.org/#/q/topic:bp/ironic-shellinabox-console
> > > https://review.openstack.org/#/q/status:open+topic:bug/1526371
> > >
> > >
> > > PS: to achieve this goal, we can also add a new console driver in
> ironic
> > > [4], but I think it doesn't conflict with this, as shellinabox is
> capable
> > > of integrating with nova, and we should support all console drivers.
> > >
> > >
> > > [1] https://blueprints.launchpad.net/nova/+spec/shellinabox-http-proxy
> > > [2]
> > >
> https://blueprints.launchpad.net/horizon/+spec/ironic-shellinabox-console
> > > [3] https://bugs.launchpad.net/ironic/+bug/1526371
> > > [4] https://bugs.launchpad.net/ironic/+bug/1553083
> > >
> > > --
> > > Best Regards,
> > > Zhenguo Niu
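
For context, a very rough sketch of the shape of the proxy being proposed -
illustration only, not the actual nova or ironic code; validate_console_token()
is a stand-in for whatever consoleauth-style check the real implementation
would use:

    import select
    import socket


    def proxy_console(client_sock, token, validate_console_token):
        # 1. Only proceed if the token maps to a console connection,
        #    e.g. a dict like {'host': ..., 'port': ...}.
        connect_info = validate_console_token(token)
        if not connect_info:
            client_sock.close()
            return
        # 2. Connect to the shellinabox service exposing the node's console.
        backend = socket.create_connection((connect_info['host'],
                                            connect_info['port']))
        # 3. Shovel bytes both ways until either side closes.
        peers = {client_sock: backend, backend: client_sock}
        while True:
            readable, _, _ = select.select(list(peers), [], [])
            for sock in readable:
                data = sock.recv(4096)
                if not data:
                    for s in peers:
                        s.close()
                    return
                peers[sock].sendall(data)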
> > >
> >
> >
> >
> > --
> > Best Regards,
> > Zhenguo Niu
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

2016-04-12 Thread Fox, Kevin M
Have a look at this script:
https://review.openstack.org/#/c/158003/1/bin/neutron-local-port

Nova does basically the same thing, but plugs it into the VM.
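
Roughly, the plug step is just an ovs-vsctl call run locally by nova-compute on
the hypervisor host (which is why it does not need to reach across from the
control node). A simplified sketch of what nova's create_ovs_vif_port boils
down to - illustrative only, see nova/network/linux_net.py and
nova/virt/libvirt/vif.py for the real code (which runs it as root via
rootwrap):

    import subprocess


    def plug_vif_into_ovs(bridge, dev, iface_id, mac, instance_uuid):
        # Tag the port with Neutron's port UUID (iface-id) so the L2 agent or
        # SDN controller can recognise it and finish the wiring.
        subprocess.check_call([
            'ovs-vsctl', '--timeout=120',
            '--', '--if-exists', 'del-port', dev,
            '--', 'add-port', bridge, dev,
            '--', 'set', 'Interface', dev,
            'external-ids:iface-id=%s' % iface_id,
            'external-ids:iface-status=active',
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_uuid,
        ])

    # e.g. plug_vif_into_ovs('br-int', 'tap-example', neutron_port_id,
    #                        port_mac, instance_uuid)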

Thanks,
Kevin


From: 张晨 [zhangchen9...@126.com]
Sent: Tuesday, April 12, 2016 1:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

Thanks for the answer, I finally located the implementation in
nova.network.linux_net.create_ovs_vif_port

But how could nova execute ovs-vsctl for the compute-node hypervisor from just
the control node?






At 2016-04-12 13:13:48, "Sławek Kapłoński"  wrote:
>Hello,
>
>I don't know ODL and how it works, but for the ovs-agent it is nova-compute
>which adds the port to the ovs bridge (see for example nova/virt/libvirt/vif.py)
>
>--
>Pozdrawiam / Best regards
>Sławek Kapłoński
>sla...@kaplonski.pl
>
>Dnia wtorek, 12 kwietnia 2016 12:31:01 CEST 张晨 pisze:
>> Hello everyone,
>>
>>
>> I have a question about Neutron. I learned that the ovs-agent receives the
>> update-port RPC notification, and updates the ovsdb data for the VM port.
>>
>>
>> But what is the situation when I use SDN controllers instead of the OVS
>> mechanism driver? I found nowhere in ODL that adds the VM port to ovs.
>>
>>
>> I asked the author of the related ODL plugin, but he told me that OpenStack
>> adds the VM port to ovs.
>>
>>
>> Then, where is the implementation in OpenStack that adds the VM port to ovs,
>> when I'm using ODL to replace the OVS mechanism driver?
>>
>>
>> Thanks





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #78

2016-04-12 Thread Emilien Macchi
On Mon, Apr 11, 2016 at 10:19 AM, Emilien Macchi <emil...@redhat.com> wrote:
> Hi,
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting4.
>
> https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack
>
> As usual, feel free to bring topics in this etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160412
>
> We'll also have open discussion for bugs & reviews, so anyone is welcome
> to join.

We did our meeting, notes are available here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-04-12-15.00.html

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread Duncan Thomas
The HP facility just outside Dublin (Ireland) is available again, depending on
dates.

On 12 April 2016 at 17:05, Sean McGinnis  wrote:

> Hey Cinder team (and those interested),
>
> We've had a few informal conversations on the channel and in meetings,
> but wanted to capture some things here and spread awareness.
>
> I think it would be good to start planning for our Newton midcycle.
> These have been incredibly productive in the past (at least in my
> opinion) so I'd like to get it on the schedule so folks can start
> planning for it.
>
> For Mitaka we held our midcycle in the R-10 week. That seemed to work
> out pretty well, but I also think it might be useful to hold it a little
> earlier in the cycle to keep some momentum going and make sure things
> stay pretty focused for the rest of the cycle.
>
> For reference, here is the current release schedule for Newton:
>
> http://releases.openstack.org/newton/schedule.html
>
> R-10 puts us in the last week of July.
>
> I would have a conflict R-16, R-15. We probably want to avoid US
> Independence Day R-13, and milestone weeks R-18 and R12.
>
> So potential weeks look like:
>
> * R-17
> * R-14
> * R-11
> * R-10
> * R-9
>
> Nova is in the process of figuring out their date. If we have that, it
> would be good to try to avoid an overlap there. Our linked midcycle
> session worked out well, but probably better if they don't conflict.
>
> We also need to work out locations. Anyone able and willing to host,
> just let me know. We need a facility with wifi, able to hold ~30-40
> people, wifi, close to an airport. And wifi.
>
> At some point I still think it would be nice for our international folks
> to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
> Collins or somewhere similar.
>
> Thanks!
>
> Sean (smcginnis)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

2016-04-12 Thread Morgan Fainberg
Sorry, missed the copy/paste of links:

https://bugs.launchpad.net/keystone/+bug/1567403 [0]
https://bugs.launchpad.net/keystone/+bug/1567413 [1]

[0]
https://review.openstack.org/#/q/I4857cfe1e62d54c3c89a0206ffc895c4cf681ce5,n,z
[1] https://review.openstack.org/#/c/304688/

--Morgan

On Tue, Apr 12, 2016 at 8:16 AM, Morgan Fainberg 
wrote:

> Fixes have been proposed for both of these bugs.
>
> Cheers,
> --Morgan
>
> On Tue, Apr 12, 2016 at 12:38 AM, Dina Belova 
> wrote:
>
>> Matt,
>>
>> Thanks for sharing the information about your benchmark. Indeed we need
>> to follow up on this topic (I'll attend the summit). Let's try to collect
>> as much information as possible prior to Austin so we have more facts to
>> work with. I'll try to figure out why the local context cache did not work,
>> at least in my environment; judging by the results, most probably it did
>> not act as expected during your benchmarking either.
>>
>> Cheers,
>> Dina
>>
>> On Mon, Apr 11, 2016 at 10:57 PM, Matt Fischer 
>> wrote:
>>
>>> On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova 
>>> wrote:
>>>
 Hey, openstackers!

 Recently I was trying to profile Keystone (OpenStack Liberty vs Mitaka)
 using this set of changes
 
  (that's
 currently on review - some final steps are required there to finish the
 work) and OSprofiler.

 Some preliminary results (all in one OpenStack node) can be found here
 
  (raw
 OSprofiler reports are not yet merged to some place and can be found
 here ). The full plan
 
  of
 what's going to be tested can be found in the docs as well. In short I
 wanted to take a look at how Keystone changed its DB/cache usage from
 Liberty to Mitaka, keeping in mind that there were several changes
 introduced:

- federation support was added (and made DB scheme a bit more
complex)
- Keystone moved to oslo.cache usage
- local context cache was introduced during Mitaka

 First of all - *good job on making Keystone less DB-intensive with caching
 turned on*! If Keystone caching is turned on, the number of DB queries made
 to the Keystone DB in Mitaka is on average half of what it is in Liberty,
 comparing the same requests and topologies. Thanks to the Keystone community
 for making it happen :)

 Although, I faced *two strange issues* during my experiments, and I'm
 kindly asking you, folks, to help me here:

- I've created bug #1567403 to share information - when I turned caching on,
  the local context cache should cache identical function calls within an API
  request so that Memcache is not pinged too often. Although I saw such calls,
  Keystone still used Memcache to gather this information. Could someone take
  a look at this and help me figure out what I am observing? At first sight
  the local context cache should work ok, but for some reason I do not see it
  being used.
- One more filed bug - #1567413 - is about a somewhat opposite situation :)
  When I turned the cache off explicitly in the keystone.conf file, I still
  observed some of the values being fetched from Memcache... Your help is
  very much appreciated!

 Thanks in advance and sorry for a long email :)

 Cheers,
 Dina


>>> Dina,
>>>
>>> Thanks for starting this conversation. I had some weird perf results
>>> comparing L to an RC release of Mitaka, but I was holding them until
>>> someone else confirmed what I saw. I'm testing token creation and
>>> validation. From what I saw, token validation slowed down in Mitaka. After
>>> doing my benchmark runs, the traffic to memcache was 8x in Mitaka from what
>>> it was in Liberty. That implies more caching but 8x is a lot and even
>>> memcache references are not free.
>>>
>>> I know some of the Keystone folks are looking into this so it will be
>>> good to follow-up on it. Maybe we could talk about this at the summit?
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>>
>> Dina Belova
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>> 

Re: [openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

2016-04-12 Thread Morgan Fainberg
Fixes have been proposed for both of these bugs.

Cheers,
--Morgan

On Tue, Apr 12, 2016 at 12:38 AM, Dina Belova  wrote:

> Matt,
>
> Thanks for sharing the information about your benchmark. Indeed we need to
> follow up on this topic (I'll attend the summit). Let's try to collect as
> much information as possible prior to Austin so we have more facts to work
> with. I'll try to figure out why the local context cache did not work, at
> least in my environment; judging by the results, most probably it did not
> act as expected during your benchmarking either.
>
> Cheers,
> Dina
>
> On Mon, Apr 11, 2016 at 10:57 PM, Matt Fischer 
> wrote:
>
>> On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova 
>> wrote:
>>
>>> Hey, openstackers!
>>>
>>> Recently I was trying to profile Keystone (OpenStack Liberty vs Mitaka)
>>> using this set of changes
>>> 
>>>  (that's
>>> currently on review - some final steps are required there to finish the
>>> work) and OSprofiler.
>>>
>>> Some preliminary results (all in one OpenStack node) can be found here
>>> 
>>>  (raw
>>> OSprofiler reports are not yet merged to some place and can be found
>>> here ). The full plan
>>> 
>>>  of
>>> what's going to be tested can be found in the docs as well. In short I
>>> wanted to take a look at how Keystone changed its DB/cache usage from
>>> Liberty to Mitaka, keeping in mind that there were several changes
>>> introduced:
>>>
>>>- federation support was added (and made DB scheme a bit more
>>>complex)
>>>- Keystone moved to oslo.cache usage
>>>- local context cache was introduced during Mitaka
>>>
>>> First of all - *good job on making Keystone less DB-intensive with caching
>>> turned on*! If Keystone caching is turned on, the number of DB queries made
>>> to the Keystone DB in Mitaka is on average half of what it is in Liberty,
>>> comparing the same requests and topologies. Thanks to the Keystone
>>> community for making it happen :)
>>>
>>> Although, I faced *two strange issues* during my experiments, and I'm
>>> kindly asking you, folks, to help me here:
>>>
>>>    - I've created bug #1567403 to share information - when I turned
>>>      caching on, the local context cache should cache identical function
>>>      calls within an API request so that Memcache is not pinged too often.
>>>      Although I saw such calls, Keystone still used Memcache to gather this
>>>      information. Could someone take a look at this and help me figure out
>>>      what I am observing? At first sight the local context cache should
>>>      work ok, but for some reason I do not see it being used.
>>>    - One more filed bug - #1567413 - is about a somewhat opposite
>>>      situation :) When I turned the cache off explicitly in the
>>>      keystone.conf file, I still observed some of the values being fetched
>>>      from Memcache... Your help is very much appreciated!
>>>
>>> Thanks in advance and sorry for a long email :)
>>>
>>> Cheers,
>>> Dina
>>>
>>>
>> Dina,
>>
>> Thanks for starting this conversation. I had some weird perf results
>> comparing L to an RC release of Mitaka, but I was holding them until
>> someone else confirmed what I saw. I'm testing token creation and
>> validation. From what I saw, token validation slowed down in Mitaka. After
>> doing my benchmark runs, the traffic to memcache was 8x in Mitaka from what
>> it was in Liberty. That implies more caching but 8x is a lot and even
>> memcache references are not free.
>>
>> I know some of the Keystone folks are looking into this so it will be
>> good to follow-up on it. Maybe we could talk about this at the summit?
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread Anita Kuno
On 04/12/2016 10:05 AM, Sean McGinnis wrote:
> Hey Cinder team (and those interested),
> 
> We've had a few informal conversations on the channel and in meetings,
> but wanted to capture some things here and spread awareness.
> 
> I think it would be good to start planning for our Newton midcycle.
> These have been incredibly productive in the past (at least in my
> opinion) so I'd like to get it on the schedule so folks can start
> planning for it.
> 
> For Mitaka we held our midcycle in the R-10 week. That seemed to work
> out pretty well, but I also think it might be useful to hold it a little
> earlier in the cycle to keep some momentum going and make sure things
> stay pretty focused for the rest of the cycle.
> 
> For reference, here is the current release schedule for Newton:
> 
> http://releases.openstack.org/newton/schedule.html
> 
> R-10 puts us in the last week of July.
> 
> I would have a conflict R-16, R-15. We probably want to avoid US
> Independence Day R-13, and milestone weeks R-18 and R12.
> 
> So potential weeks look like:
> 
> * R-17
> * R-14
> * R-11
> * R-10
> * R-9
> 
> Nova is in the process of figuring out their date. If we have that, it
> would be good to try to avoid an overlap there. Our linked midcycle
> session worked out well, but probably better if they don't conflict.

Thank you!

> 
> We also need to work out locations. Anyone able and willing to host,
> just let me know. We need a facility with wifi, able to hold ~30-40
> people, wifi, close to an airport. And wifi.

Working wiki, yeah?

> 
> At some point I still think it would be nice for our international folks
> to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
> Collins or somewhere similar.
> 
> Thanks!
> 
> Sean (smcginnis)

Thanks Sean, appreciate this being discussed now.

Thanks again,
Anita.

> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread ClaytonLuce, Timothy
And I'll add that Raleigh, NC (NetApp) is available too.  Nothing better than
summer in the South :)

-Original Message-
From: D'Angelo, Scott [mailto:scott.dang...@hpe.com] 
Sent: Tuesday, April 12, 2016 10:39 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Cinder] Newton Midcycle Planning

I'll throw this out there: Fort Collins HPE site is available.

Scott D'Angelo (scottda)

From: Sean McGinnis [sean.mcgin...@gmx.com]
Sent: Tuesday, April 12, 2016 8:05 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder] Newton Midcycle Planning

Hey Cinder team (and those interested),

We've had a few informal conversations on the channel and in meetings, but 
wanted to capture some things here and spread awareness.

I think it would be good to start planning for our Newton midcycle.
These have been incredibly productive in the past (at least in my
opinion) so I'd like to get it on the schedule so folks can start planning for 
it.

For Mitaka we held our midcycle in the R-10 week. That seemed to work out 
pretty well, but I also think it might be useful to hold it a little earlier in 
the cycle to keep some momentum going and make sure things stay pretty focused 
for the rest of the cycle.

For reference, here is the current release schedule for Newton:

http://releases.openstack.org/newton/schedule.html

R-10 puts us in the last week of July.

I would have a conflict R-16, R-15. We probably want to avoid US Independence 
Day R-13, and milestone weeks R-18 and R12.

So potential weeks look like:

* R-17
* R-14
* R-11
* R-10
* R-9

Nova is in the process of figuring out their date. If we have that, it would be 
good to try to avoid an overlap there. Our linked midcycle session worked out 
well, but probably better if they don't conflict.

We also need to work out locations. Anyone able and willing to host, just let 
me know. We need a facility with wifi, able to hold ~30-40 people, wifi, close 
to an airport. And wifi.

At some point I still think it would be nice for our international folks to be 
able to do a non-US midcycle, but I'm fine if we end up back in Ft Collins or 
somewhere similar.

Thanks!

Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron] [API]Make API errors conform to the common error message without microversion

2016-04-12 Thread Brandon Logan
As a note, there will be a design session around the API refactor
efforts going on.  Microversioning will be a topic.

On Tue, 2016-04-12 at 14:59 +0200, Ihar Hrachyshka wrote:
> Xianshan  wrote:
> 
> > Hi, Duncan & Michael,
> > Thanks a lot for your replies.
> >
> > Definitely I agree with you that the microversion is the best approach to
> > solve the backwards compatibility issue, and neutron is also going to
> > adopt it [1]. But it will take a long time to fully introduce it into
> > neutron, I think.
> > So IMO, we can continue this discussion and then implement this feature  
> > in parallel with the microversion.
> >
> > Actually, according to the design [2], only a slight change will be needed
> > once microversioning has landed, i.e.
> > replacing the 'new header' with the microversion to control the final
> > format of the error message in the wsgi interface.
> >
> > [1] https://review.openstack.org/#/c/136760/
> > [2] https://review.openstack.org/#/c/298704
> 
> If no one is going to help with microversioning, it won’t ever happen. I  
> suggest we consolidate whichever resources we have to get it done, instead  
> of working on small API iterations as proposed in the thread.
> 
> Ihar
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Anita Kuno
On 04/11/2016 09:45 PM, Tony Breeds wrote:
> On Mon, Apr 11, 2016 at 03:49:16PM -0500, Matt Riedemann wrote:
>> A few people have been asking about planning for the nova midcycle for
>> newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work the
>> best. R-14 is close to the US July 4th holiday, R-13 is during the week of
>> the US July 4th holiday, and R-12 is the week of the n-2 milestone.
> 
> Thanks for starting this now.  It really helps  to know these things early.
> 
> This cycle *may* be harder than typical with:
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9478
> 
> Having said that, either of those options work for me.
> 
>> As far as a venue is concerned, I haven't heard any offers from companies to
>> host yet. If no one brings it up by the summit, I'll see if hosting in
>> Rochester, MN at the IBM site is a possibility.
> 
> +1, would do Rochester again.  The drive from MSP was trivial ;P

Also consider the local international airport, it is not bad.

Thanks,
Anita.

> 
> Yours Tony.
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] RDO Mitaka packages released

2016-04-12 Thread Rich Bowen
The RDO community is pleased to announce the general availability of the
RDO build for OpenStack Mitaka for RPM-based distributions - CentOS
Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building
private, public, and hybrid clouds and Mitaka is the 13th release from
the OpenStack project (http://openstack.org), which is the work of more
than 2500 contributors from around the world.
(Source: http://stackalytics.com/ )

See Red Hat Stack
(http://redhatstackblog.redhat.com/2016/03/21/learn-whats-coming-in-openstack-mitaka/)
for a brief overview of what's new in Mitaka.

The RDO community project (https://www.rdoproject.org/) curates,
packages, builds, tests, and maintains a complete OpenStack component
set for RHEL and CentOS Linux and is a founding member of the CentOS
Cloud Infrastructure SIG
(https://wiki.centos.org/SpecialInterestGroup/Cloud). The Cloud
Infrastructure SIG focuses on delivering a great user experience for
CentOS Linux users looking to build and maintain their own on-premise,
public or hybrid clouds.

All work on RDO, and on the downstream release, Red Hat OpenStack
Platform, is 100% open source, with all code changes going upstream first.

For a complete list of what's in RDO, see the RDO projects yaml file
(https://github.com/redhat-openstack/rdoinfo/blob/master/rdo.yml).


Getting Started

There are three ways to get started with RDO.

To spin up a proof of concept cloud quickly, and on limited hardware,
try the RDO QuickStart (http://rdoproject.org/Quickstart). You can run
RDO on a single node to get a feel for how it works.

For a production deployment of RDO, use the TripleO Quickstart
(https://www.rdoproject.org/tripleo/) and you'll be running a production
cloud in short order.

Finally, if you want to try out OpenStack, but don't have the time or
hardware to run it yourself, visit TryStack (http://trystack.org/),
where you can use a free public OpenStack instance, running RDO
packages, to experiment with the OpenStack management interface and API,
launch instances, configure networks, and generally familiarize yourself
with OpenStack.


Getting Help

The RDO Project participates in a Q&A service at
http://ask.openstack.org; for more developer-oriented content we
recommend joining the rdo-list mailing list
(https://www.redhat.com/mailman/listinfo/rdo-list). Remember to post a
brief introduction about yourself and your RDO story. You can also find
extensive documentation on the RDO docs site
(https://www.rdoproject.org/documentation).

We also welcome comments and requests on the CentOS Mailing lists
(https://lists.centos.org/) and the CentOS IRC Channels ( #centos on
irc.freenode.net ), however we have a more focused audience in the RDO
venues.


Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO
community pages (https://www.rdoproject.org/community/) and the CentOS
Cloud SIG page (https://wiki.centos.org/SpecialInterestGroup/Cloud). See
also the RDO packaging documentation
(https://www.rdoproject.org/packaging/).

Join us in #rdo on the Freenode IRC network, and follow us at
@RDOCommunity (http://twitter.com/rdocommunity) on Twitter.

And, if you're going to be in Austin for the OpenStack Summit two weeks
from now, join us on Thursday at 4:40pm for the RDO community BoF
(https://goo.gl/P6kyWR).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread D'Angelo, Scott
I'll throw this out there: Fort Collins HPE site is available.

Scott D'Angelo (scottda)

From: Sean McGinnis [sean.mcgin...@gmx.com]
Sent: Tuesday, April 12, 2016 8:05 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder] Newton Midcycle Planning

Hey Cinder team (and those interested),

We've had a few informal conversations on the channel and in meetings,
but wanted to capture some things here and spread awareness.

I think it would be good to start planning for our Newton midcycle.
These have been incredibly productive in the past (at least in my
opinion) so I'd like to get it on the schedule so folks can start
planning for it.

For Mitaka we held our midcycle in the R-10 week. That seemed to work
out pretty well, but I also think it might be useful to hold it a little
earlier in the cycle to keep some momentum going and make sure things
stay pretty focused for the rest of the cycle.

For reference, here is the current release schedule for Newton:

http://releases.openstack.org/newton/schedule.html

R-10 puts us in the last week of July.

I would have a conflict R-16, R-15. We probably want to avoid US
Independence Day R-13, and milestone weeks R-18 and R12.

So potential weeks look like:

* R-17
* R-14
* R-11
* R-10
* R-9

Nova is in the process of figuring out their date. If we have that, it
would be good to try to avoid an overlap there. Our linked midcycle
session worked out well, but probably better if they don't conflict.

We also need to work out locations. Anyone able and willing to host,
just let me know. We need a facility with wifi, able to hold ~30-40
people, wifi, close to an airport. And wifi.

At some point I still think it would be nice for our international folks
to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
Collins or somewhere similar.

Thanks!

Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Flavio Percoco

On 11/04/16 18:05 +, Amrith Kumar wrote:

Adrian, thanks for your detailed mail.



Yes, I was hopeful of a silver bullet and as we’ve discussed before (I think it
was Vancouver), there’s likely no silver bullet in this area. After that
conversation, and some further experimentation, I found that even if Trove had
access to a single Compute API, there were other significant complications
further down the road, and I didn’t pursue the project further at the time.



Adrian, Amrith,

I've spent enough time researching this area during the last month and my
conclusion is pretty much the above. There's no silver bullet in this area and
I'd argue there shouldn't be one. Containers, bare metal and VMs differ in such
a way (feature-wise) that it would not be good, as far as deploying databases
goes, for there to be one compute API. Containers allow for a different
deployment architecture than VMs, and so does bare metal.



We will be discussing Trove and Containers in Austin [1] and I’ll try and close
the loop with you on this while we’re in town. I would still like to come up
with some way in which we can offer users the option of provisioning databases
as containers.


As the person leading this session, I'm also looking forward to providing such
provisioning facilities to Trove users. Let's do this.

Cheers,
Flavio



Thanks,



-amrith



[1] https://etherpad.openstack.org/p/trove-newton-summit-container



From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: Monday, April 11, 2016 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)

Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)



Amrith,



I respect your point of view, and agree that the idea of a common compute API
is attractive… until you think a bit deeper about what that would mean. We
seriously considered a “global” compute API at the time we were first
contemplating Magnum. However, what we came to learn through the journey of
understanding the details of how such a thing would be implemented, that such
an API would either be (1) the lowest common denominator (LCD) of all compute
types, or (2) an exceedingly complex interface. 




You expressed a sentiment below that trying to offer choices for VM, Bare Metal
(BM), and Containers for Trove instances “adds considerable complexity”.
Roughly the same complexity would accompany the use of a comprehensive compute
API. I suppose you were imagining an LCD approach. If that’s what you want,
just use the existing Nova API, and load different compute drivers on different
host aggregates. A single Nova client can produce VM, BM (Ironic), and
Container (libvirt-lxc) instances all with a common API (Nova) if it’s
configured in this way. That’s what we do. Flavors determine which compute type
you get.



If what you meant is that you could tap into the power of all the unique
characteristics of each of the various compute types (through some modular
extensibility framework) you’ll likely end up with complexity in Trove that is
comparable to integrating with the native upstream APIs, along with the
disadvantage of waiting for OpenStack to continually catch up to the pace of
change of the various upstream systems on which it depends. This is a recipe
for disappointment.



We concluded that wrapping native APIs is a mistake, particularly when they are
sufficiently different than what the Nova API already offers. Containers APIs
have limited similarities, so when you try to make a universal interface to all
of them, you end up with a really complicated mess. It would be even worse if
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s
approach is to offer the upstream native APIs for the different container
orchestration engines (COE), and compose Bays for them to run on that are built
from the compute types that OpenStack supports. We do this by using different
Heat orchestration templates (and conditional templates) to arrange a COE on
the compute type of your choice. With that said, there are still gaps where not
all storage or network drivers work with Ironic, and there are non-trivial
security hurdles to clear to safely use Bays composed of libvirt-lxc instances
in a multi-tenant environment.



My suggestion to get what you want for Trove is to see if the cloud has Magnum,
and if it does, create a bay with the flavor type specified for whatever
compute type you want, and then use the native API for the COE you selected for
that bay. Start your instance on the COE, just like you use Nova today. This
way, you have low complexity in Trove, and you can scale both the number of
instances of your data nodes (containers), and the infrastructure on which they
run (Nova instances).



Regards,



Adrian







   On Apr 11, 2016, at 8:47 AM, Amrith Kumar  wrote:



   Monty, Dims, 

Re: [openstack-dev] [stackalytics] Proposal for some code/feature changes

2016-04-12 Thread Ilya Shakhat
Hi Nikhil,

2016-04-12 5:59 GMT+03:00 Nikhil Komawar :

> Hello,
>
> I was hoping to make some changes to the stackalytics dashboard
> specifically of this type [1] following my requested suggestions here
> [2]; possibly add a few extra columns for +0s and just Bot +1s. I think
> having this info gives a much clearer picture of the kind of reviews
> someone is/wants to be involved in. I couldn't find documentation in the
> README or anywhere else, and the minimal amount of docstrings is making
> it difficult for me to figure out the changes.
>
> What's the best possible route to accomplish this?
>

Well, I see two different metrics here: the first counts +0s and the second
is additional analytics over existing reviews.

Counting +0s or comments is something that is asked for from time to time and
something that I'd like to avoid. The reason is that retrieving comments
leads to a higher load on Gerrit and slows down the update cycle.

However, Stackalytics already retrieves comments for some projects (those
that have external CIs, like nova), so we can try this new metric there as an
experiment. The metric should probably be kept distinct from "reviews" so as
not to skew the current numbers. As for the implementation, the changes would
be needed in both the processor and the dashboard; the code would be similar
to the existing review counting.

The second feature is counting votes on patch sets posted by bots.
This one should be easier to implement, and all the data is already available.
In a report like [1], every vote record can be extended with info from the
patch set; the filtering should take into account the author's company - all
bots are assigned to '*robots'.
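
As a rough illustration of that filtering (the record fields below are made up
for the example and are not Stackalytics' actual schema):

    def count_votes_on_bot_patches(vote_records, patch_sets):
        # Count review votes cast on patch sets whose author is a bot account.
        bot_patch_ids = set(ps['id'] for ps in patch_sets
                            if ps['author_company'] == '*robots')
        counts = {}
        for vote in vote_records:
            if vote['patch_set_id'] in bot_patch_ids:
                counts[vote['reviewer']] = counts.get(vote['reviewer'], 0) + 1
        return counts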

Thanks,
Ilya


>
> [1] http://stackalytics.com/report/contribution/astara-group/30
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2016-April/091836.html
>
> --
>
> Thanks,
> Nikhil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Flavio Percoco

On 11/04/16 16:53 +, Adrian Otto wrote:

Amrith,

I respect your point of view, and agree that the idea of a common compute API
is attractive… until you think a bit deeper about what that would mean. We
seriously considered a “global” compute API at the time we were first
contemplating Magnum. However, what we came to learn through the journey of
understanding the details of how such a thing would be implemented was that such
an API would either be (1) the lowest common denominator (LCD) of all compute
types, or (2) an exceedingly complex interface. 


You expressed a sentiment below that trying to offer choices for VM, Bare Metal
(BM), and Containers for Trove instances “adds considerable complexity”.
Roughly the same complexity would accompany the use of a comprehensive compute
API. I suppose you were imagining an LCD approach. If that’s what you want,
just use the existing Nova API, and load different compute drivers on different
host aggregates. A single Nova client can produce VM, BM (Ironic), and
Container (libvirt-lxc) instances all with a common API (Nova) if it’s
configured in this way. That’s what we do. Flavors determine which compute type
you get.
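
For readers who have not used that pattern: the usual way to steer a flavor at
a particular compute type is to match aggregate metadata against flavor extra
specs, with the AggregateInstanceExtraSpecsFilter enabled in the scheduler. A
rough sketch with python-novaclient follows; host names and metadata keys are
made up, and the exact client calls may differ slightly between releases:

    def make_container_flavor(nova):
        # 'nova' is assumed to be an already-authenticated novaclient v2 client.
        # Group the hosts that run the libvirt-lxc (or ironic, etc.) driver.
        agg = nova.aggregates.create('lxc-hosts', None)
        nova.aggregates.add_host(agg, 'compute-lxc-01')
        nova.aggregates.set_metadata(agg, {'compute_type': 'container'})

        # A flavor whose extra specs only match that aggregate's metadata.
        flavor = nova.flavors.create('container.small', ram=2048, vcpus=1,
                                     disk=20)
        flavor.set_keys(
            {'aggregate_instance_extra_specs:compute_type': 'container'})
        return flavor

Booting with such a flavor then lands the instance on the container-capable
hosts, which is the "flavors determine which compute type you get" behaviour
described above.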

If what you meant is that you could tap into the power of all the unique
characteristics of each of the various compute types (through some modular
extensibility framework) you’ll likely end up with complexity in Trove that is
comparable to integrating with the native upstream APIs, along with the
disadvantage of waiting for OpenStack to continually catch up to the pace of
change of the various upstream systems on which it depends. This is a recipe
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are
sufficiently different than what the Nova API already offers. Containers APIs
have limited similarities, so when you try to make a universal interface to all
of them, you end up with a really complicated mess. It would be even worse if
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s
approach is to offer the upstream native APIs for the different container
orchestration engines (COE), and compose Bays for them to run on that are built
from the compute types that OpenStack supports. We do this by using different
Heat orchestration templates (and conditional templates) to arrange a COE on
the compute type of your choice. With that said, there are still gaps where not
all storage or network drivers work with Ironic, and there are non-trivial
security hurdles to clear to safely use Bays composed of libvirt-lxc instances
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum,
and if it does, create a bay with the flavor type specified for whatever
compute type you want, and then use the native API for the COE you selected for
that bay. Start your instance on the COE, just like you use Nova today. This
way, you have low complexity in Trove, and you can scale both the number of
instances of your data nodes (containers), and the infrastructure on which they
run (Nova instances).



I've been researching this area and have reached pretty much the same
conclusion. I've had moments of wondering whether creating bays is something
Trove should do, but I now think it should.

The need to handle the native API is the part I find a bit painful, as that
means more code needs to live in Trove for us to provide these provisioning
facilities. I wonder if a common *library* would help here, at least to handle
those "simple" cases. Anyway, I look forward to chatting with you all about 
this.
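
To make that a bit more concrete, the shape I have in mind for such a library
is something tiny like the following (every name here is invented, purely a
sketch of the interface, not an existing project):

# Purely illustrative: a minimal "give me somewhere to run containers"
# abstraction that Trove (or others) could consume.
import abc

class ContainerHostProvisioner(abc.ABC):
    """Hide "is Magnum there? which COE? how do I reach it?" behind two calls."""

    @abc.abstractmethod
    def ensure_cluster(self, name, node_flavor, node_count):
        """Create or reuse a bay/cluster; return an endpoint plus credentials."""

    @abc.abstractmethod
    def run_container(self, cluster, image, command, env=None):
        """Start one container on the cluster through the COE's native API."""


class MagnumKubernetesProvisioner(ContainerHostProvisioner):
    """A Magnum/Kubernetes backend would live behind the same interface."""

    def ensure_cluster(self, name, node_flavor, node_count):
        raise NotImplementedError("would drive python-magnumclient here")

    def run_container(self, cluster, image, command, env=None):
        raise NotImplementedError("would talk to the Kubernetes API here")

Trove would then only ever see ensure_cluster()/run_container(), and the
per-COE messiness stays in one place.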

It'd be great if you (and other magnum folks) could join this session:

https://etherpad.openstack.org/p/trove-newton-summit-container

Thanks for chiming in, Adrian.
Flavio


Regards,

Adrian




   On Apr 11, 2016, at 8:47 AM, Amrith Kumar  wrote:

   Monty, Dims, 


    I read the notes and was similarly intrigued by the idea. In particular,
    from the perspective of projects like Trove, having a common Compute API is
    very valuable. It would allow such projects to have a single view of
    provisioning compute, as we can today with Nova, and to get the benefit of
    bare metal through Ironic, VMs through Nova, and containers through
    nova-docker.

    With this in place, a project like Trove can offer database-as-a-service on
    a spectrum of compute infrastructures, as any end-user would expect.
    Databases don't always make sense in VMs; containers are great for
    quick-and-dirty prototyping, and VMs are great for much more, but there
    are databases that in production will only be meaningful on bare metal.

    Therefore, if there is a move towards offering a common API for VMs,
    bare metal and containers, that would be huge.

   Without such a mechanism, consuming containers in Trove adds considerable
   complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a
   working prototype of Trove 

Re: [openstack-dev] [Openstack] [Ceilometer][Architecture] Transformers in Kilo vs Liberty(and Mitaka)

2016-04-12 Thread Chris Dent


This discussion needs to be happening on openstack-dev too, so
cc'ing that list in as well. The top of the thread is at
http://lists.openstack.org/pipermail/openstack/2016-April/015864.html

On Tue, 12 Apr 2016, Chris Dent wrote:


On Tue, 12 Apr 2016, Nadya Shakhat wrote:


   I'd like to discuss one question with you. Perhaps you remember that
in Liberty we decided to get rid of transformers on polling agents [1]. I'd
like to describe several issues we are facing now because of this decision.
1. pipeline.yaml inconsistency.


The original idea with centralizing the transformers to just the
notification agents was to allow a few different things, only one of
which has happened:

* Make the pollster code less complex with few dependencies,
 easing deployment options (this has happened), maintenance
 and custom pollsters.

 With the transformers in the pollsters they must maintain a
 considerable amount of state that makes effective use of eventlet
 (or whatever the chosen concurrent solution is) more difficult.

 The ideal pollster is just something that spits out a dumb piece
 of identified data every interval. And nothing else.

* Make it far easier to use and conceptualize the use of pollsters
 outside of the ceilometer environment as simple data collectors.
 In that context transformation would occur only close to the data
 consumption not at the data production.

 This, following the good practice of services doing one thing
 well.

* Migrate away from the pipeline.yaml that conflated sources and
 sinks to a model that is good both for computers and humans:

 * sources over here
 * sinks over here

That these other things haven't happened means we're in an awkward
situation.

Are the options the following?

* Do what you suggest and pull transformers back into the pollsters.
 Basically revert the change. I think this is the wrong long term
 solution but might be the best option if there's nobody to do the
 other options.

* Implement a pollster.yaml for use by the pollsters and consider
 pipeline.yaml as the canonical file for the notification agents as
 that's where the actual _pipelines_ are. Somewhere in there, kill
 interval as a concept on pipeline side.

 This of course doesn't address the messaging complexity. I admit
 that I don't understand all the issues there but it often feels
 like we are doing that aspect of things completely wrong, so I
 would hope that before we change things there we consider all the
 options.

What else?

One probably crazy idea: What about figuring out the desired end-meters
of common transformations and making them into dedicated pollsters?
That would encapsulate the transformation not at the level of the polling
manager but in the individual pollster.
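
For instance (a toy sketch only — the interface and sample format here are
invented, not the real pollster plugin API): a pollster that keeps the one
previous reading it needs and emits cpu_util directly, so nothing downstream
has to transform anything:

# Toy sketch: derive the end-meter inside the pollster itself, so the
# pipeline stays transformation-free.
import time

class CPUUtilPollster:
    """Emits a cpu_util percentage instead of raw cumulative CPU time."""

    def __init__(self):
        self._last = {}  # instance_id -> (timestamp, cumulative_cpu_ns)

    def get_samples(self, read_cpu_ns, instance_ids):
        now = time.time()
        for instance_id in instance_ids:
            cpu_ns = read_cpu_ns(instance_id)
            prev = self._last.get(instance_id)
            self._last[instance_id] = (now, cpu_ns)
            if prev is None:
                continue  # need two readings before a rate means anything
            elapsed = now - prev[0]
            if elapsed <= 0:
                continue
            util = (cpu_ns - prev[1]) / (elapsed * 1e9) * 100.0
            yield {'name': 'cpu_util', 'resource_id': instance_id,
                   'unit': '%', 'volume': util}

The trade-off is that this keeps a little state in the pollster, which is
exactly the cost of pushing the transformation down; the win is that the
pipeline and the notification agents stay dumb.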




--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Generic solution for bare metal testing

2016-04-12 Thread Jim Rollenhagen
On Thu, Apr 07, 2016 at 02:42:09AM +, Jeremy Stanley wrote:
> On 2016-04-06 18:33:06 +0300 (+0300), Igor Belikov wrote:
> [...]
> > I suppose there are security issues when we talk about running
> > custom code on bare metal slaves, but I'm not sure I understand
> > the difference from running custom code on a virtual machine if
> > bare metal nodes are isolated, don't contain any sensitive data
> > and follow a regular redeployment procedure.
> [...]
> 
> With a virtual machine, you can delete it and create a new one.
> Nothing remains behind.
> 
> With a physical machine, arbitrary code running in the scope of a
> test with root access can do _nasty_ things like backdoor your
> server firmware with shims that even masquerade as the firmware
> updater and persist through redeployments that include firmware
> refreshes.
> 
> Physical servers persist, and are therefore vulnerable in this
> scenario in ways which virtual servers are not.

Right, it's a huge effort to run a secure bare metal cloud running
arbitrary code. Homogeneous hardware and vendor cooperation are a must,
and that's only part of it.

I don't foresee the infra team having the resources to take on such a
task any time soon (but of course, I'm not well-informed on the infra
team's workload).

Another option for baremetal in the gate is baremetal flavors in other
public clouds - Rackspace has one (OnMetal) but doesn't yet support
custom images, and others have launched or are working on one. Once
there are two clouds that support baremetal with custom images, we could
put those resources in the upstream CI pool.

// jim

> -- 
> Jeremy Stanley
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Michael Johnson
Armando,

Is there any way we can move the "Neutron: Development track: future
of *-aas projects" session?  It overlaps with the LBaaS talk:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/6893?goback=1

Michael


On Mon, Apr 11, 2016 at 9:56 PM, Armando M.  wrote:
> Hi folks,
>
> A provisional schedule for the Neutron project is available [1]. I am still
> working with the session chairs and going through/ironing out some details
> as well as gathering input from [2].
>
> I hope I can get something more final by the end of this week. In the
> meantime, please feel free to ask questions/provide comments.
>
> Many thanks,
> Armando
>
> [1]
> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
> [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread Sean McGinnis
Hey Cinder team (and those interested),

We've had a few informal conversations on the channel and in meetings,
but wanted to capture some things here and spread awareness.

I think it would be good to start planning for our Newton midcycle.
These have been incredibly productive in the past (at least in my
opinion) so I'd like to get it on the schedule so folks can start
planning for it.

For Mitaka we held our midcycle in the R-10 week. That seemed to work
out pretty well, but I also think it might be useful to hold it a little
earlier in the cycle to keep some momentum going and make sure things
stay pretty focused for the rest of the cycle.

For reference, here is the current release schedule for Newton:

http://releases.openstack.org/newton/schedule.html

R-10 puts us in the last week of July.

I would have a conflict in R-16 and R-15. We probably want to avoid US
Independence Day week (R-13), and the milestone weeks R-18 and R-12.

So potential weeks look like:

* R-17
* R-14
* R-11
* R-10
* R-9

Nova is in the process of figuring out their date. If we have that, it
would be good to try to avoid an overlap there. Our linked midcycle
session worked out well, but probably better if they don't conflict.

We also need to work out locations. Anyone able and willing to host,
just let me know. We need a facility with wifi, able to hold ~30-40
people, wifi, close to an airport. And wifi.

At some point I still think it would be nice for our international folks
to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
Collins or somewhere similar.

Thanks!

Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][reno] Broken releasenotes in glance_store

2016-04-12 Thread Andreas Jaeger
On 2016-04-12 08:31, Andreas Jaeger wrote:
> On 2016-04-12 03:33, Nikhil Komawar wrote:
>> To close this:
>>
>> This has been fixed as part of the earlier opened bug
>> https://bugs.launchpad.net/glance-store/+bug/1568767 and the other is
>> marked as a duplicate.
>>
> 
> Yes, it's fixed now.

And you have release notes:

http://docs.openstack.org/releasenotes/glance_store/

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas and the need for reservation

2016-04-12 Thread Andrew Laski


On Tue, Apr 5, 2016, at 09:57 AM, Ryan McNair wrote:
> >It is believed that reservations help to reserve a set of resources
> >beforehand, and hence prevent any other upcoming request
> >(serial or parallel) from exceeding quota if, because of the original request,
> >the project might have reached the quota limits.
> >
> >Questions :-
> >1. Does reservation in its current state as used by Nova, Cinder, Neutron
> >help to solve the above problem?
> 
> In Cinder the reservations are useful for grouping quota
> for a single request, and if the request ends up failing
> the reservation gets rolled back. The reservations also
> rollback automatically if not committed within a certain
> time. We also use reservations with Cinder nested quotas
> to group a usage request that may propagate up to a parent
> project in order to manage commit/rollback of the request
> as a single unit.
> 
> >
> >2. Is it consistent and reliable? Even with reservations, can we run into
> >inconsistent behaviour?
> 
> Others can probably answer this better, but I have not
> seen the reservations be a major issue. In general with
> quotas we're not doing the check and set atomically which
> can get us in an inconsistent state with quota-update,
> but that's unrelated to the reservations.
> 
> >
> >3. Do we really need it ?
> >
> 
> Seems like we need *some* way of keeping track of usage
> reserved during a particular request and a way to easily
> roll that back at a later time. I'm open to alternatives
> to reservations, just wondering what the big downside of
> the current reservation system is.

Jay goes into it a little bit in his response to another quota thread
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090560.html
and I share his thoughts here.

With a reservation system you're introducing eventual consistency into
the system rather than being strict because reservations are not tied to
a concrete thing. You can't do a point in time check of whether the
reserved resources are going to eventually be used if something happens
like a service restart and a request is lost. You have to have no
activity for the duration of the expiration time to let things settle
before getting a real view of quota usages.

Instead if you tie quota usage to the resource records then you can
always get a view of what's actually in use.

One thing that should probably be clarified in all of these discussions
is what exactly the quota is on. I see two answers: the quota is against
the actual resource usage, or the quota is against the records tracking
usage. Since we currently track quotas with a reservation system I think
it's fair to say that we're not actually tracking against resource like
disk/RAM/CPU being in use. I would consider the resource to be in use as
soon as there's a db record for something like an instance or volume
which indicates that there will be consumption of those resources. What
that means in practice is that quotas can be committed right away and
there doesn't seem to be any need for a reservation system.
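
As a sketch of what "quota against the records" can look like (sqlite here is
just a stand-in to keep the example self-contained; the real thing would be
the project's own DB layer and models):

# Illustrative only: check-and-create in one transaction against the records
# that represent the resource, so there is nothing to reserve or expire.
import sqlite3

def create_instance(conn, project_id, name, quota_limit):
    conn.execute("BEGIN IMMEDIATE")  # take the write lock before counting
    try:
        (in_use,) = conn.execute(
            "SELECT COUNT(*) FROM instances WHERE project_id = ?",
            (project_id,)).fetchone()
        if in_use + 1 > quota_limit:
            raise ValueError("quota exceeded: %d in use, limit %d"
                             % (in_use, quota_limit))
        conn.execute("INSERT INTO instances (project_id, name) VALUES (?, ?)",
                     (project_id, name))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage txns by hand
conn.execute("CREATE TABLE instances (project_id TEXT, name TEXT)")
create_instance(conn, "demo", "db-node-1", quota_limit=10)

If the request dies after this point, the row is still there (or gets cleaned
up along with the instance), so usage never drifts the way an orphaned
reservation can.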



> 
> - Ryan McNair (mc_nair)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-12 Thread Jim Rollenhagen
On Thu, Apr 07, 2016 at 06:36:20AM -0400, Sean Dague wrote:
> On 04/07/2016 03:26 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
> > Hi Nova, Ops, stackers,
> >  
> > I am trying to figure out the different use cases and requirements there
> > would be for host maintenance, and would like to get feedback, transfer
> > all this to a spec, and discuss what could and should land in
> > Nova or other places.
> >  
> > As I work in the OPNFV Doctor project, which has the Telco perspective on
> > related requirements, I started to draft a spec based on something
> > smaller that would be nice to have in Nova and less complicated to land
> > in a single cycle. Anyhow, the feedback from the Nova API team was to look at
> > this as a whole and gather more. This is why I am asking here and not
> > just through the spec, to get input on requirements and use cases from a wider
> > audience. Here is the draft spec, proposing first just a maintenance window
> > to be added:
> > _https://review.openstack.org/296995/_
> >  
> > Here is link to OPNFV Doctor requirements:
> > _http://artifacts.opnfv.org/doctor/docs/requirements/02-use_cases.html#nvfi-maintenance_
> > _http://artifacts.opnfv.org/doctor/docs/requirements/03-architecture.html#nfvi-maintenance_
> > _http://artifacts.opnfv.org/doctor/docs/requirements/05-implementation.html#nfvi-maintenance_
> >  
> > Here is what I could transfer as use cases, but would ask feedback to
> > get more:
> >  
> > As admin I want to set a maintenance period for a certain host.
> >  
> > As admin I want to know when the host is ready for the actions to be done by admin
> > during the maintenance, meaning the physical resources have been emptied.
> >  
> > As owner of a server I want to prepare for maintenance to minimize downtime,
> > keep capacity at the needed level, and switch the HA service to a server not
> > affected by the maintenance.
> >  
> > As owner of a server I want to know when my servers will be down because of
> > host maintenance, as it might be that servers are not moved to another host.
> >  
> > As owner of a server I want to know if the host is to be totally removed, so
> > instead of keeping my servers on the host during maintenance, I want to move
> > them somewhere else.
> >  
> > As owner of a server I want to send an acknowledgement that I am ready for host
> > maintenance, and I want to state whether servers are to be moved or kept on the
> > host. Removal and creation of servers is in the owner's control already.
> > Optionally, server configuration data could hold information about automatic
> > actions to be done when the host is going down unexpectedly or in a controlled
> > manner, and also whether the actions differ if the host is down permanently or
> > only temporarily. Still, this needs an acknowledgement from the server owner,
> > as he needs time for an application-level controlled HA service switchover.
> 
> While I definitely understand the value of these in a deployment, I'm a
> bit concerned about baking all this structured data into Nova itself, as it
> effectively means putting some degree of a ticket management system in
> Nova that's specific to a workflow you've decided on here. Baked-in
> workflow is hard to change when the needs of an industry do.
> 
> My counter proposal on your spec was to provide a free form field
> associated with maintenance mode which could contain a url linking to
> the details. This could be a jira ticket, or a REST url for some other
> service. This would actually be much like how we handle images in Nova,
> with a url to glance to find more info.

FWIW, this is what we do in ironic. A maintenance boolean, and a
maintenance_reason text field that operators can dump text/links/etc in.

As an example:
$ ironic node-set-maintenance $uuid on --reason "Dead fiber // ticket 123 // 
jroll 2016/04/12"

It's worked well for Rackspace's deployment, at least, and I seem to
remember others being happy with it as well.

// jim

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] doc8 0.7.0 release

2016-04-12 Thread no-reply
We are content to announce the release of:

doc8 0.7.0: Style checker for Sphinx (or other) RST documentation

For more details, please see below.

Changes in doc8 0.6.0..0.7.0


a96c31f Skip long line check for rst definition list term
f6cb930 Remove argparse from requirements
5abd304 Remove support for py33/py26
8ed2fba remove python 2.6 trove classifier
0e7e4b1 Put py34 first in the env order of tox
b0dd12d Deprecated tox -downloadcache option removed
864e093 Added the launchpad bug url and fixed one typo
1791030 Update .gitreview for new namespace
5115b86 Use a more relevant launchpad home-page url
17f56bb Fix grammar issue in README.rst
65138ae Fix invalid table formatting
64f22e4 Fix issue of checking max_len for directives

Diffstat (except docs and test files)
-

.gitignore|  2 +-
.gitreview|  2 +-
CONTRIBUTING.rst  |  3 +++
README.rst| 28 ++--
requirements.txt  |  1 -
setup.cfg |  5 +
tox.ini   |  5 +
9 files changed, 76 insertions(+), 26 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 86a2ba2..8fcc512 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +4,0 @@
-argparse



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][glance] glance_store 0.13.1 release (mitaka)

2016-04-12 Thread no-reply
We are tickled pink to announce the release of:

glance_store 0.13.1: OpenStack Image Service Store Library

This release is part of the mitaka stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/glance_store

With package available at:

https://pypi.python.org/pypi/glance_store

Please report issues through launchpad:

http://bugs.launchpad.net/glance-store

For more details, please see below.

Changes in glance_store 0.13.0..0.13.1
--

33c08d8 Update .gitreview for stable/mitaka
4600adb Mock swiftclient's functions in tests

Diffstat (except docs and test files)
-

.gitreview  |  1 +
2 files changed, 47 insertions(+), 3 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Change openstack.node to openstack.cluster

2016-04-12 Thread Weyl, Alexey (Nokia - IL)
Hi,

After some discussion, we have decided to change "openstack.node" to 
"openstack.cluster" as the "type" of the openstack cluster in the graph, to 
conform with the standard openstack terminology.

Alexey

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] Eliminating the DevStack layer

2016-04-12 Thread Andreas Scheuring
I must admit, I really like this idea of getting rid of all the devstack
params. It's always a mess looking up the functionality of the various
variables in the code when trying out something new.

I also understand the concern that was raised by somebody (couldn't find
it anymore) that the Neutron config defaults are not necessarily the
values that make sense for devstack. So what about having some code in
the neutron devstack plugin defining the default values without using
fancy variables - so just using iniset?

On the other hand I find it useful to have access to some variables like
HOST_IP or PHYSICAL_NETWORK. I don't want to hardcode my IP into
every setting, and maybe I want to define an interface or bridge mapping
that makes use of the PHYSICAL_NETWORK var.
However, it's still possible to define your own vars in local.conf and
use them in post-config...

But you already kind of convinced me to go the post-config way with the
macvtap integration... because then it's just a matter of documentation...

Thanks


-- 
-
Andreas (IRC: scheuran) 

On Fr, 2016-04-08 at 15:07 +, Sean M. Collins wrote:
> Prior to the introduction of local.conf, the only way to configure
> OpenStack components was to introduce code directly into DevStack, so
> that DevStack would pick it up then inject it into the configuration
> file.
> 
> This was because DevStack writes out new configuration files on each
> run, so it wasn't possible for you to make changes to any configuration
> file (nova.conf, neutron.conf, ml2_plugin.ini, etc..).
> 
> So, someone who wanted to set the Linux Bridge Agent's
> physical_interface_mappings setting for Neutron would have to use
> $LB_INTERFACE_MAPPINGS in DevStack, which would then be invoked by
> DevStack[1].
> 
> The local.conf functionality was introduced quite a while back, and
> I think it's time to have a conversation about why we should start
> moving away from the previous practice of declaring variables in
> DevStack, and then having them injected into the configuration files.
> 
> The biggest issue is: There is a disconnect between the developers
> using DevStack and someone who is an operator or who has been editing
> OpenStack conf files directly. So, for example I can tell you all about
> how DevStack has a bunch of variables for configuring Neutron (which is
> Not a Good Thing™), and how those go into DevStack and then end up coming
> out the other side in a Neutron configuration file.
> 
> Really, I would like to get rid of the intermediate layer (DevStack)
> and get both Devs and Deployers to be able to just say: Here's my
> neutron.conf - let's diff mine and yours and see what we need to sync.
> 
> Matt Kassawara and I have had this issue, since he's coming from the
> OSAD side, and I'm coming from the DevStack side. We both know what the
> Neutron configuration should end up as, but DevStack having its own set
> of variables and how those variables are handled and eventually rendered
> as a Neutron config file makes things more difficult than it needs to
> be, since Matt has to now go and learn about how DevStack handles all
> these Neutron specific variables.
> 
> The Neutron refactor[2] that I am working on, I am trying to configure
> as little as possible in DevStack. Neutron should be able to, out of the
> box, Just Work™. If it can't, then that needs to be fixed in Neutron.
> 
> Secondly, the Neutron refactor will be getting rid of all the things
> like $LB_INTERFACE_MAPPINGS - I would *much* prefer that someone using
> DevStack actually set the appropriate line in their local.conf
> 
> Such as:
> 
> [[post-config|/$Q_PLUGIN_CONF_FILE]]
> [linux_bridge]
> physical_interface_mappings = foo:bar
> 
> 
> The advantage of this is, when someone is working with DevStack, the
> things they are configuring are the same as all the other OpenStack 
> documentation.
> 
> For example, someone could read the Networking Guide, read the example
> configuration[3] and the only thing they'd need to learn is our syntax
> for specifying what file the contents go in (the 
> "[[post-config|/$Q_PLUGIN_CONF_FILE]]" piece).
> 
> Thoughts?
> 
> [1]: 
> https://github.com/openstack-dev/devstack/blob/1195a5b7394fc5b7a1cb1415978e9997701f5af1/lib/neutron_plugins/linuxbridge_agent#L63
> 
> [2]: https://review.openstack.org/168438
> 
> [3]: 
> http://docs.openstack.org/liberty/networking-guide/scenario-classic-lb.html#example-configuration
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-04-12 Thread Jim Rollenhagen
On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu wrote:
> Maybe we can continue the discussion here, as there wasn't enough time in the
> IRC meeting :)

Someone mentioned this would make a good summit session, as there's a
few competing proposals that are all good options. I do welcome
discussion here until then, but I'm going to put it on the schedule. :)

// jim

> 
> On Fri, Apr 8, 2016 at 1:06 AM, Zhenguo Niu  wrote:
> 
> >
> > Ironic is currently using shellinabox to provide a serial console, but
> > it's not compatible
> > with nova, so I would like to propose a new console type and a custom HTTP
> > proxy [1]
> > which validates the token and connects to the ironic console from nova.
> >
> > On Horizon side, we should add support for the new console type [2] as
> > well, here are some screenshots from my local environment.
> >
> >
> >
> >
> > Additionally, shellinabox console port management should be improved in
> > ironic: instead of ports being manually specified, we should introduce a dynamic
> > allocation/deallocation [3] mechanism.
> >
> > Functionality is being implemented in Nova, Horizon and Ironic:
> > https://review.openstack.org/#/q/topic:bp/shellinabox-http-proxy
> > https://review.openstack.org/#/q/topic:bp/ironic-shellinabox-console
> > https://review.openstack.org/#/q/status:open+topic:bug/1526371
> >
> >
> > PS: to achieve this goal, we can also add a new console driver in ironic
> > [4], but I think it doesn't conflict with this, as shellinabox is capable
> > of integrating with nova, and we should support all console drivers.
> >
> >
> > [1] https://blueprints.launchpad.net/nova/+spec/shellinabox-http-proxy
> > [2]
> > https://blueprints.launchpad.net/horizon/+spec/ironic-shellinabox-console
> > [3] https://bugs.launchpad.net/ironic/+bug/1526371
> > [4] https://bugs.launchpad.net/ironic/+bug/1553083
> >
> > --
> > Best Regards,
> > Zhenguo Niu
> >
> 
> 
> 
> -- 
> Best Regards,
> Zhenguo Niu




> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] [Live Migration] Prevent invalid live migration instead of failing and setting instance to error state after porbinding failed

2016-04-12 Thread Andreas Scheuring
Hi Kevin, thanks for your input! See my comments in line



-- 
-
Andreas (IRC: scheuran) 

On Di, 2016-04-12 at 04:12 -0700, Kevin Benton wrote:
> We can't change the host_id until after the migration or it will break
> l2pop and other drivers that use the host as a location indicator (e.g.
> many top-of-rack drivers do this to determine which switch port should
> be wired up).

I was assuming something like that. Thanks for clarification.


> There is already a patch that went in to inform Neutron of the
> destination host here for proactive DVR
> wiring: https://review.openstack.org/#/c/275420/ During this port
> update phase, we can validate the destination host is 'bindable' with
> the given port info and fail it otherwise. This should block Nova from
> continuing.
> 
> 
> However, we have to figure out how ML2 will know if something is
> 'bindable'. The only interface we have right now is bind_port. It is
> possible that we can do a faked bind_port attempt using what the port
> host_id would look like after migration. It's made clear in the ML2
> driver API that bind_port results may not be
> committed: 
> https://github.com/openstack/neutron/blob/4440297a2ff5a6893b748c2400048e840283c718/neutron/plugins/ml2/driver_api.py#L869

Agree, that was one of the alternatives I had in mind as well. The
introduced API looks good and could be used I think!

> So the workflow would be something like:
> * Nova calls Neutron port update with the destination host in the
> binding details
> * In ML2 port update, the destination host is placed into a copy of
> the port in the host_id field and bind_port is called.
> * If bind_port is unsuccessful, it fails the port update for Nova to
> prevent migration.
> * If bind_port is successful, the results of the port update are
> committed (with the original host_id and the new host_id in the
> destination_host field).
Ideally the port update would then return the copied binding information
instead of the real one. Doing so, I could use this data to modify the
domain.xml with the relevant information before migration.
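
Something along these lines is what I picture for the dry-run check (plain
Python with invented names, just to illustrate the idea — the real ML2
plumbing looks different):

# Rough sketch of the "fake bind on the destination host" validation step.
def can_bind_on_host(port, dest_host, try_bind):
    """Return True if `port` would bind successfully on `dest_host`.

    `try_bind` stands in for the non-committing bind attempt; the ML2 driver
    API's promise that bind_port results may be discarded is what makes a
    dry run like this safe.
    """
    candidate = dict(port)
    candidate['binding:host_id'] = dest_host  # pretend the move already happened
    result = try_bind(candidate)
    return bool(result) and result.get('binding:vif_type') != 'binding_failed'

# Tiny usage example with a fake driver that only knows one host.
known_hosts = {'compute-1': {'binding:vif_type': 'bridge'}}
ok = can_bind_on_host({'id': 'p1', 'binding:host_id': 'compute-0'},
                      'compute-1',
                      lambda p: known_hosts.get(p['binding:host_id']))
print(ok)  # True -> Nova may proceed; False would abort the migration early

If that check fails, the port update fails and Nova never starts the
migration, which is exactly the behaviour I'm after.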
> * Workflow continues as normal here.
> 
> 
> So this heavily exploits the fact that bind_port is supposed to be
> mutation-free in the ML2 driver API. We may encounter drivers that
> don't follow this now, but they are already exposed to other bugs if
> they mutate state so I think the onus would be on them to fix their
> driver.
> 
> 
> Cheers,
> Kevin Benton
> 

> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-04-12 Thread Alex Xu
Hi,

We have our weekly Nova API meeting tomorrow. The meeting is held on
Wednesdays at 1300 UTC and the IRC channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

