[openstack-dev] [nova] [scheduler] New filter: AggregateInstanceTypeFilter

2016-07-27 Thread Alonso Hernandez, Rodolfo
Hello:

We have developed a new filter for Nova scheduler 
(SPEC). We have a POC in 
https://review.openstack.org/#/c/346662/.

My question is how to proceed with this code:

1)  Merge into the Nova code. This option does not seem to be acceptable (see the spec
comments; previous versions were merged and then reverted in earlier releases).

2)  Merge into networking-ovs-dpdk 
(https://github.com/openstack/networking-ovs-dpdk) repo.

3)  Create a new repo to support this new filter.

Which option should we take?
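
For reference, whichever repository ends up hosting the filter, it can be loaded
as an out-of-tree filter through nova.conf. A rough sketch (the Python path below
is hypothetical and depends on where the code finally lives):

[DEFAULT]
# keep nova's in-tree filters available and add the custom filter class
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = networking_ovs_dpdk.scheduler.filters.aggregate_instance_type_filter.AggregateInstanceTypeFilter
# enable it in the list of filters actually used for scheduling
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,AggregateInstanceTypeFilter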

Thank you in advance.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [Sahara] Error launching a defined domain with XML

2016-07-27 Thread Vitaly Gridnev
Hello,

Qiming just added the [sahara] tag to the subject of your message.

There are several open questions; can you give answers to them?

1. Can you describe your environment?
2. Can you describe the templates you are using? It seems you are creating the
cluster using the CLI; could you please post a description of each node group of
the cluster? A description of the cluster template would also be helpful.
3. What images are you using to create the cluster?

On Wed, Jul 27, 2016 at 5:20 AM, 云淡风轻 <821696...@qq.com> wrote:

> Thanks for your reply. But where do I add [sahara]?
> Is it in nova.conf?
> Can you give more details? Thanks.
>
>
> -- Original Message --
> *From:* "Qiming Teng";
> *Sent:* Wednesday, July 27, 2016, 9:34 AM
> *To:* "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>;
> *Subject:* Re: [openstack-dev] [Sahara] Error launching a defined domain
> with XML
>
> This should add a [sahara] tag to the subject line.
>
> - Qiming
>
> On Tue, Jul 26, 2016 at 05:46:25PM +0800, 云淡风轻 wrote:
> > Hi,
> >
> > When creating a cluster with Sahara 4.0.0 (Mitaka) using the command line:
> >
> >
> > $  time openstack dataprocessing cluster create --json
> my_cluster_create_default.json
> > ++--+
> > | Field  | Value|
> > ++--+
> > | Anti affinity  |  |
> > | Cluster template id| b7e8f1b5-1aff-4baf-977b-3ef74d4326cf |
> > | Description| None |
> > | Id | 47ac6404-8fc0-4d3c-b8eb-0f3a2b9e1b2c |
> > | Image  | b3e899c4-a282-4337-a084-50ca0454535e |
> > | Is protected   | False|
> > | Is public  | False|
> > | Is transient   | False|
> > | Name   | my-cluster-default-1 |
> > | Neutron management network | 3aaa392b-af4d-4f70-9e2e-1b71a965ff7d |
> > | Node groups| master:1, worker:1   |
> > | Plugin name| vanilla  |
> > | Status | Validating   |
> > | Use autoconfig | True |
> > | User keypair id| my_stack |
> > | Version| 2.7.1|
> > ++--+
> > $  nova list
> >
> +--+---+++-+--+
> > | ID   | Name  |
> Status | Task State | Power State | Networks |
> >
> +--+---+++-+--+
> > | 4e3fdb6c-6805-447c-b01d-c4cb0fc1ca87 | my-cluster-default-1-master-0 |
> BUILD  | scheduling | NOSTATE |  |
> > | 95dd207b-6047-416c-a95f-6d08aa0c6409 | my-cluster-default-1-worker-0 |
> BUILD  | scheduling | NOSTATE |  |
> >
> +--+---+++-+--+
> >
> >
> >
> >
> > an error occurs:
> >
> >
> > log in nova-compute.log
> > 2016-07-25 18:09:47.440 37208 INFO nova.compute.resource_tracker
> [req-8ce86863-8324-4dc3-93bf-a997d3512ab1 - - - - -] Auditing locally
> available compute resources for node localhost.localdomain
> > 2016-07-25 18:09:47.767 37208 WARNING nova.virt.libvirt.driver
> [req-8ce86863-8324-4dc3-93bf-a997d3512ab1 - - - - -] couldn't obtain the
> vcpu count from domain id: af17e813-fe6d-4769-b7d3-17369afd7313, exception:
> Requested operation is not valid: cpu affinity is not supported
> > 2016-07-25 18:09:47.768 37208 WARNING nova.virt.libvirt.driver
> [req-8ce86863-8324-4dc3-93bf-a997d3512ab1 - - - - -] couldn't obtain the
> vcpu count from domain id: eaa076c0-2930-4257-8fd7-e36033c4e86c, exception:
> Requested operation is not valid: cpu affinity is not supported
> > 2016-07-25 18:10:41.136 37208 ERROR nova.virt.libvirt.guest
> [req-14a51b87-4b95-4b80-a042-193df77278bb 7fff70fbbf83441a9b3c4d91a5613825
> 6cb156a82d0f486a9f50132be9438eb6 - - -] Error launching a defined domain
> with XML: 
> >   instance-00e3
> >
> >
> > 2016-07-25 18:10:41.259 37208 ERROR nova.compute.manager
> [req-14a51b87-4b95-4b80-a042-193df77278bb 7fff70fbbf83441a9b3c4d91a5613825
> 6cb156a82d0f486a9f50132be9438eb6 - - -] [instance:
> af17e813-fe6d-4769-b7d3-17369afd7313] Instance failed to spawn
> > 2016-07-25 18:10:41.259 37208 ERROR nova.compute.manager [instance:
> af17e813-fe6d-4769-b7d3-17369afd7313] Traceback (most recent call last):
> > 2016-

Re: [openstack-dev] [devstack] libvirt/qemu source install plugin.

2016-07-27 Thread Markus Zoeller
On 26.07.2016 17:45, Michele Paolino wrote:
> I see. In any case, I am open to discussing further contributions and
> improvements to the plugin. Let me know!
> 
> In case this can be useful for you, in the early implementations of the 
> current devstack plugin (i.e., Patch set 1)[1], it was able to download 
> and install libvirt and qemu from git repositories. The community then 
> suggested going with the tar releases, and that's where the current 
> implementation comes from.
> 
> [1]https://review.openstack.org/#/c/108714/1
> 
> Regards,

If I understand your devstack plugin correctly, the build of the *.deb
is part of the installation process. I assume this takes some time. If
this is used in a gate test job, it will be done over and over again.
That's why the "apr" plugin [1] was build with the assumption that the
*.deb is pre-build and stored at a location reachable within a gate test
job. The PoC used the Ubuntu Cloud Archives to drill through things.

I have to finish other items before mid/end of August. After that I'd
like to have some kind of short coding-sprint/meeting with you (and
other interested people) to get things up and running. Combining forces
and stuff: https://www.youtube.com/watch?v=ZGegECwSiGY

References:
[1] https://github.com/openstack/devstack-plugin-additional-pkg-repos/


-- 
Regards, Markus Zoeller (markus_z)

> On 07/26/2016 05:23 PM, Mooney, Sean K wrote:
>> Hi, I was not aware of the
>> plugin tar installer, but it would not have been useful in my case, as
>> I needed to build from a specific git commit ID, not release tarballs.
>>
>> For my use case I also need the ability to apply patches automatically to
>> evaluate changes
>> to QEMU and libvirt before they are merged upstream.
>>
>> It would be good to see if we could combine the two, though, to avoid duplicating
>> the code to build and install libvirt and QEMU.
>>
>> If there is no objection, I think it still makes sense to create an
>> openstack/devstack-plugin-libvirt-qemu repo, as the
>> devstack-plugin-tar-installer
>> explicitly will be using tar files, not git repos.
>>
>>
>>> -Original Message-
>>> From: Michele Paolino [mailto:m.paol...@virtualopensystems.com]
>>> Sent: Tuesday, July 26, 2016 1:40 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> 
>>> Cc: Kashyap Chamarthy ;
>>> mzoel...@linux.vnet.ibm.com; Mooney, Sean K 
>>> Subject: Re: [openstack-dev] [devstack] libvirt/qemu source install
>>> plugin.
>>>
>>> All,
>>>
>>> the purpose of the devstack-plugin-tar-installer[1] is exactly what you
>>> mentioned: a tool needed to test experimental features in libvirt and
>>> qemu. I am planning to release a new version next week, addressing some
>>> of the comments received, however new testers/developers are more than
>>> welcome! Sean, maybe you can have a look at the code and, if you are
>>> interested, we can discuss how to proceed further.
>>>
>>> I also think it would be nice if we can join all together the efforts
>>> on this project[2], as I believe this is an interesting feature for
>>> devstack. Maybe there is also a way to integrate this work with the
>>> gate Markus was mentioning.
>>>
>>> Thank you Kashyap for pointing this out!
>>>
>>> Regards,
>>>
>>> [1]https://review.openstack.org/#/c/313568/
>>> [2]https://review.openstack.org/#/q/project:openstack/devstack-plugin-
>>> tar-installer
>>>
>>> On 07/26/2016 01:13 PM, Kashyap Chamarthy wrote:
 On Thu, Jul 21, 2016 at 02:25:46PM +0200, Markus Zoeller wrote:
> On 20.07.2016 22:38, Mooney, Sean K wrote:
>> Hi
>> I recently had the need to test a feature (vhost-user reconnect)
>> that was committed to the QEMU source tree a few weeks ago. As there
>> has been no release since then, I needed to build from source, so to
>> that end I wrote a small devstack plugin to do just that.
>>
>> I was thinking of opening a review to create a new repo to host the
>> plugin under the openstack namespace
>> (openstack/devstack-plugin-libvirt-qemu), but before I do I wanted
>> to ask if others are interested in a devstack plugin that just
>> compiles and installs QEMU and libvirt?
>>
>> Regards Sean.
>>
> tonby and I are trying to make the devstack plugin "additional package repos"
> (apr) work [1]. What you did is within the scope of that project. We
> also have an experimental job
> "gate-tempest-dsvm-nova-libvirt-kvm-apr"[2].  The last time I worked
> on this I wasn't able to create installable *.deb packages from
> libvirt + qemu source code. Other work items did then get more
> important and I had to pause the work on that.  I think we can work
> together to combine our efforts there.
 NB: There's also in-progress work to allow configuring libvirt / QEMU
 from source tar balls, as an external DevStack plugin:

   https://review.openstack.org/#/c/313568/ -- Plugin to setup
   libvirt/QEMU from tar releases

 It was original

Re: [openstack-dev] [heat][requirements] Re: [Openstack-stable-maint] Stable check of openstack/heat failed

2016-07-27 Thread Ihar Hrachyshka

Ethan Lynn  wrote:


Hi Tony,
  I submitted a patch to use upper-constraints for review:
https://review.openstack.org/#/c/347639/. Let's wait for the feedback and
results.


Why is it sent to liberty? Isn't master affected too?

I see no constraints applied in master:  
https://github.com/openstack/heat/blob/master/tox.ini#L9
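
For comparison, the usual pattern other projects use in tox.ini to apply the
constraints looks roughly like this (the exact URL and environment-variable
handling varies per project and per stable branch):

[testenv]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}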


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] service validation during deployment steps

2016-07-27 Thread Steven Hardy
Hi Emilien,

On Tue, Jul 26, 2016 at 03:59:33PM -0400, Emilien Macchi wrote:
> I would love to hear some feedback about $topic, thanks.

Sorry for the slow response; we did discuss this on IRC, but providing that
feedback and some other comments below:

> On Fri, Jul 15, 2016 at 11:31 AM, Emilien Macchi  wrote:
> > Hi,
> >
> > Some people on the field brought interesting feedback:
> >
> > "As a TripleO User, I would like the deployment to stop immediately
> > after a resource creation failure during a step of the deployment and
> > be able to easily understand what service or resource failed to be
> > installed".
> >
> > Example:
> > If during step 4 Puppet tries to deploy Neutron and OVS, but OVS fails
> > to start for some reason, the deployment should stop at the end of the
> > step.

I don't think anyone will argue against this use-case, we absolutely want
to enable a better "fail fast" for deployment problems, as well as better
surfacing of why it failed.

> > So there are 2 things in this user story:
> >
> > 1) Be able to run some service validation within a step deployment.
> > Note about the implementation: make the validation composable per
> > service (OVS, nova, etc) and not per role (compute, controller, etc).

+1, now we have composable services we need any validations to be
associated with the services, not the roles.

That said, it's fairly easy to imagine an interface like
step_config/config_settings could be used to wire in composable service
validations on a per-role basis, e.g similar to what we do here, but
per-step:

https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud.yaml#L1144

Similar to what was proposed (but never merged) here:

https://review.openstack.org/#/c/174150/15/puppet/controller-post-puppet.yaml

> > 2) Make this information readable and easy to access and understand
> > for our users.
> >
> > I have a proof-of-concept for 1) and partially 2), with the example of
> > OVS: https://review.openstack.org/#/c/342202/
> > This patch will make sure OVS is actually usable at step 4 by running
> > 'ovs-vsctl show' during the Puppet catalog run, and if it's working, it
> > will create a Puppet anchor. This anchor is currently not useful but
> > could be in the future if we want to rely on it for orchestration.
> > I wrote the service validation in Puppet 2 years ago when doing Spinal
> > Stack with eNovance:
> > https://github.com/openstack/puppet-openstacklib/blob/master/manifests/service_validation.pp
> > I think we could re-use it very easily, it has been proven to work.
> > Also, the code is within our Puppet profiles, so it's by design
> > composable and we don't need to make any connection with our current
> > services with some magic. Validation will reside within Puppet
> > manifests.
> > If you look my PoC, this code could even live in puppet-vswitch itself
> > (we already have this code for puppet-nova, and some others).

I think having the validations inside the puppet implementation is OK, but
ideally I think we do want it to be part of the puppet modules themselves
(not part of the puppet-tripleo abstraction layer).

The issue I'd have with putting it in puppet-tripleo is that if we're going
to do this in a TripleO-specific way, it should probably be done via a
method that's more config-tool agnostic.  Otherwise we'll have to recreate
the same validations for future implementations (I'm thinking specifically
about containers here, and possibly ansible [1]).

So, in summary, I'm +1 on getting this integrated if it can be done with
little overhead and it's something we can leverage via the puppet modules
vs puppet-tripleo.

> >
> > Ok now, what if validation fails?
> > I'm testing it here: https://review.openstack.org/#/c/342205/
> > If you look at /var/log/messages, you'll see:
> >
> > Error: 
> > /Stage[main]/Tripleo::Profile::Base::Neutron::Ovs/Openstacklib::Service_validation[openvswitch]/Exec[execute
> > openvswitch validation]/returns: change from notrun to 0 failed
> >
> > So it's pretty clear by looking at the logs that the openvswitch service
> > validation failed and something is wrong. You'll also notice in the
> > logs that the deployment stopped at step 4 since OVS is not considered to
> > be running.
> > It's partially addressing 2) because we need to make it more explicit
> > and readable. Dan Prince had the idea to use
> > https://github.com/ripienaar/puppet-reportprint to print a nice report
> > of Puppet catalog result (we haven't tried it yet). We could also use
> > Operational Tools later to monitor Puppet logs and find Service
> > validation failures.

This all sounds good, but we do need to think beyond the puppet
implementation, e.g how will we enable similar validations in a container
based deployment?

I remember SpinalStack also used serverspec; can you describe the
differences with using that tool? (Was it only used for post-deploy
validation of the whole server, not per-step validation?)

I'm just wondering if the overhead of integrating per-service vali

Re: [openstack-dev] [fuel] Capacity table

2016-07-27 Thread Dmitry Dmitriev
Hello Vitaly,

Thank you for this answer.
The main question here is the business logic:
do we have to use the new design or not?

With best regards, Dmitry

> On 26 Jul 2016, at 17:47, Vitaly Kramskikh  wrote:
> 
> Hi, Dmitry,
> 
> Your design seems to be similar to one of our attempts to fix this bug:
> https://review.openstack.org/#/c/280737/. That fix was reverted, though,
> because it led to a bug with a higher priority:
> https://bugs.launchpad.net/fuel/+bug/1556909. So your proposed design would
> lead to reopening that bug.
> 
> 2016-07-19 11:06 GMT+03:00 Dmitry Dmitriev  >:
> Hello All!
> 
> We have a very old bug about the Capacity table on the Dashboard tab of 
> environment in Fuel:
> 
> https://bugs.launchpad.net/fuel/+bug/1375750 
> 
> 
> Current design:
> 
> https://drive.google.com/open?id=0Bxi_JFs365mBNy1WT0xQT253SWc 
> 
> 
> It shows the full capacity (CPU/Memory/HDD) of all nodes discovered by Fuel.
> 
> New design: 
> 
> https://drive.google.com/open?id=0Bxi_JFs365mBaWZ0cUtla3N6aEU 
> 
> 
> It contains compute node CPU/Memory capacity and Ceph disk capacity only.
> 
> New design pros:
> - cloud administrator can easily estimate all available resources for cloud 
> instances
> 
> New design cons:
> - if the cloud doesn't use Ceph, then the HDD value is zero
> 
> What do you think about the new design?
> 
> With best regards, Dmitry
> 
> 
> 
> 
> -- 
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] ui-cookiecutter

2016-07-27 Thread Shuu Mutou
Hi everyone,

I have uploaded a cookiecutter template for dashboard plugins [1]. I hope it will
help you create new plugins.

[1]: https://github.com/shu-mutou/ui-cookiecutter
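
For anyone who wants to try it, the standard cookiecutter workflow should work
(assuming pip and the cookiecutter CLI are available):

$ pip install cookiecutter
$ cookiecutter https://github.com/shu-mutou/ui-cookiecutter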

Ideally, I would donate this so that the cookiecutter is maintained in
Horizon and used both for plugin creation and for functional testing against
plugins. Otherwise, should I discuss this with openstack-dev? Please let me know
your thoughts.

This cookiecutter is based on Magnum-UI (including a patch under review) and
Horizon's master branch (as of 19 July 2016). The UI created from the
cookiecutter doesn't have an update function, but it has create/delete functions,
uses the registry service, and provides default table and detail views.

Thanks,

Shu Muto

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]agenda of weekly meeting Jul.27

2016-07-27 Thread joehuang
Hi, team,


IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting every
Wednesday starting at 13:00 UTC.



There have been two rounds of discussion in the TC weekly meeting about the Tricircle
big-tent application, so let's discuss this:



The agenda of this weekly meeting is:

# microversion support

# Concerns from Tricircle big tent application: 
https://review.openstack.org/#/c/338796/


If you have other topics to be discussed in the weekly meeting, please reply
to this mail.

Best Regards
Chaoyi Huang ( joehuang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] tripleo-common bugs, bug tracking and launchpad tags

2016-07-27 Thread Martin André
On Tue, Jul 19, 2016 at 5:20 PM, Steven Hardy  wrote:
> On Mon, Jul 18, 2016 at 12:28:10PM +0100, Julie Pichon wrote:
>> Hi,
>>
>> On Friday Dougal mentioned on IRC that he hadn't realised there was a
>> separate project for tripleo-common bugs on Launchpad [1] and that he'd
>> been using the TripleO main tracker [2] instead.
>>
>> Since the TripleO tracker is also used for client bugs (as far as I can
>> tell?), and there doesn't seem to be a huge amount of tripleo-common
>> bugs perhaps it would make sense to also track those in the main
>> tracker? If there is a previous conversation or document about bug
>> triaging beyond [3] I apologise for missing it (and would love a
>> URL!). At the moment it's a bit confusing.
>
> Thanks for raising this, yes there is a bit of a proliferation of LP
> projects, but FWIW the only one I'm using to track coordinated milestone
> releases for Newton is this one:
>
> https://launchpad.net/tripleo/
>
>> If we do encourage using the same bug tracker for multiple components,
>> I think it would be useful to curate a list of official tags [4]. The
>> main advantage of doing that is that the tags will auto-complete so
>> it'd be easier to keep them consistent (and thus actually useful).
>
> +1 I'm fine with adding tags, but I would prefer that we stopped adding
> more LP projects unless the associated repos aren't planned to be part of
> the coordinated release (e.g I don't have to track them ;)
>
>> Personally, I wanted to look through open bugs against
>> python-tripleoclient but people use different ways of marking them at
>> the moment - e.g. [tripleoclient] or [python-tripleoclient] or
>> tripleoclient (or nothing?) in the bug name. I tried my luck at adding
>> a 'tripleoclient' tag [5] to the obvious ones as an example. Maybe
>> something shorter like 'cli', 'common' would make more sense. If there
>> are other tags that come back regularly it'd probably be helpful to
>> list them explicitly as well.
>
> Sure, well I know that many python-*clients do have separate LP projects,
> but in the case of TripleO our client is quite highly coupled to the the
> other TripleO pieces, in particular tripleo-common.  So my vote is to
> create some tags in the main tripleo project and use that to filter bugs as
> needed.
>
> There are two projects we might consider removing, tripleo-common, which
> looks pretty much unused and tripleo-validations which was recently added
> by the sub-team working on validations.
>
> If folks find either useful then they can stay, but it's going to be easier
> to get a clear view on when to cut a release if we track everything
> considered part of the tripleo deliverable in one place IMHO.

The tripleo-validations issues on Launchpad now live in the tripleo
bug tracker with the 'validations' tag.
I'm going to retire the tripleo-validations Launchpad project once I find out how to do it.

Here's the relevant tripleo-validations patch:
https://review.openstack.org/347706

Thanks,
Martin

> Thanks,
>
> Steve
>
>>
>> Julie
>>
>> [1] https://bugs.launchpad.net/tripleo-common
>> [2] https://bugs.launchpad.net/tripleo
>> [3] https://wiki.openstack.org/wiki/TripleO#Bug_Triage
>> [4] https://wiki.openstack.org/wiki/Bug_Tags
>> [5] https://bugs.launchpad.net/tripleo?field.tag=tripleoclient
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Steve Hardy
> Red Hat Engineering, Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-ovn][networking-odl] Syncing neutron DB and OVN DB

2016-07-27 Thread Kevin Benton
> I'd like to see if we can solve the problems more generally.

We've tried before, but we very quickly ran into competing requirements with
regards to eventual consistency. For example, asynchronous background sync
doesn't work if someone wants their backend to confirm that port details
are acceptable (e.g. mac isn't in use by some other system outside of
openstack). Then each backend has different methods for detecting what is
out of sync (e.g. config numbers, hashes, or just full syncs on startup)
that each come with their own requirements for how much data needs to be
resent when an inconsistency is detected.

If we can come to some common ground of what is required by all of them,
then I would love to get some of this built into the ML2 framework.
However, we've discussed this at meetups/mid-cycles/summits and it
inevitably ends up with two people drawing furiously on a whiteboard,
someone crying in the corner, and everyone else arguing about the lack of
parametric polymorphism in Go.

Even between OVN and ODL in this thread, it sounds like the only thing in
common is a background worker that consumes from a queue of tasks in the
db. Maybe realistically the only common thing we can come up with is a
taskflow queue stored in the DB to solve the multiple workers issue...
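
To make that common ground a bit more concrete, here is a very rough sketch of
the kind of DB-backed journal a background worker could drain. It is purely
illustrative (not the actual networking-odl or OVN code; the table and function
names are made up):

# Purely illustrative sketch of a DB-backed journal; not the actual
# networking-odl or OVN ML2 code.  All names here are made up.
import datetime

import sqlalchemy as sa

metadata = sa.MetaData()

# One row per pending backend operation, drained in creation order.
journal = sa.Table(
    'backend_journal', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('resource_type', sa.String(36)),  # e.g. 'port', 'network'
    sa.Column('resource_id', sa.String(36)),
    sa.Column('operation', sa.String(16)),      # 'create', 'update', 'delete'
    sa.Column('state', sa.String(16), default='pending'),
    sa.Column('created_at', sa.DateTime, default=datetime.datetime.utcnow),
)

engine = sa.create_engine('sqlite://')
metadata.create_all(engine)


def record(conn, resource_type, resource_id, operation):
    # Called from the API worker, ideally in the same transaction as the
    # Neutron DB change so both commit or roll back together.
    conn.execute(journal.insert().values(
        resource_type=resource_type, resource_id=resource_id,
        operation=operation))


def sync_one(conn, apply_to_backend):
    # Run by a background thread: claim the oldest pending row, push it to
    # the backend, and mark it done.  The conditional UPDATE is what keeps
    # several workers from grabbing the same row.
    row = conn.execute(
        journal.select().where(journal.c.state == 'pending')
        .order_by(journal.c.created_at).limit(1)).fetchone()
    if row is None:
        return False
    claimed = conn.execute(
        journal.update()
        .where(sa.and_(journal.c.id == row.id, journal.c.state == 'pending'))
        .values(state='processing'))
    if claimed.rowcount != 1:
        return True  # another worker claimed it first; try again later
    try:
        apply_to_backend(row.resource_type, row.resource_id, row.operation)
        conn.execute(journal.update().where(journal.c.id == row.id)
                     .values(state='done'))
    except Exception:
        conn.execute(journal.update().where(journal.c.id == row.id)
                     .values(state='pending'))  # retry on the next pass
    return True


with engine.connect() as conn:
    record(conn, 'port', 'port-1', 'create')
    sync_one(conn, lambda *args: print('pushed to backend:', args))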

On Tue, Jul 26, 2016 at 11:31 AM, Russell Bryant  wrote:

>
>
> On Fri, Jul 22, 2016 at 7:51 AM, Numan Siddique 
> wrote:
>
>> Thanks for the comments Amitabha.
>> Please see comments inline
>>
>> On Fri, Jul 22, 2016 at 5:50 AM, Amitabha Biswas 
>> wrote:
>>
>>> Hi Numan,
>>>
>>> Thanks for the proposal. We have also been thinking about this use-case.
>>>
>>> If I’m reading this accurately (and I may not be), it seems that the
>>> proposal is to not have any OVN NB (CUD) operations (R operations outside
>>> the scope) done by the api_worker threads but rather by a new journal
>>> thread.
>>>
>>>
>> Correct.
>> ​
>>
>>
>>> If this is indeed the case, I'd like to consider the scenario where there
>>> are N neutron nodes, each node with M worker threads. The journal thread at
>>> each node contains a list of pending operations. Could there be a (sequence)
>>> dependency in the pending operations amongst the journal threads on the
>>> nodes that prevents them from getting applied (e.g. a
>>> Logical_Router_Port and Logical_Switch_Port inter-dependency), because we
>>> are returning success on neutron operations that have still not been
>>> committed to the NB DB?
>>>
>>>
>> It's a valid scenario, and the design should properly handle such
>> scenarios in case we take this approach.
>>
>
> ​I believe a new table in the Neutron DB is used to synchronize all of the
> journal threads.
> ​
> Also note that OVN currently has no custom tables in the Neutron database
> and it would be *very* good to keep it that way if we can.
>
>
>>
>> ​
>>
>>> Couple of clarifications and thoughts below.
>>>
>>> Thanks
>>> Amitabha 
>>>
>>> On Jul 13, 2016, at 1:20 AM, Numan Siddique  wrote:
>>>
>>> Adding the proper tags in subject
>>>
>>> On Wed, Jul 13, 2016 at 1:22 PM, Numan Siddique 
>>> wrote:
>>>
 Hi Neutrinos,

 Presently, In the OVN ML2 driver we have 2 ways to sync neutron DB and
 OVN DB
  - At neutron-server startup, OVN ML2 driver syncs the neutron DB and
 OVN DB if sync mode is set to repair.
  - Admin can run the "neutron-ovn-db-sync-util" to sync the DBs.

 Recently, in the v2 of networking-odl ML2 driver (Please see (1) below
 which has more details). (ODL folks please correct me if I am wrong here)

   - a journal thread is created which does the CRUD operations of
 neutron resources asynchronously (i.e it sends the REST APIs to the ODL
 controller).

>>>
>>> Would this be the equivalent of making OVSDB transactions to the OVN NB
>>> DB?
>>>
>>
>> ​Correct.
>> ​
>>
>>
>>>
>>>   - a maintenance thread is created which does some cleanup periodically
 and at startup does full sync if it detects ODL controller cold reboot.


  A few questions I have:
  - Can the OVN ML2 driver take the same or a similar approach? Are there any
 advantages in taking this approach? One advantage is that neutron resources can
 be created/updated/deleted even if the OVN ML2 driver has lost its connection
 to the ovsdb-server. The journal thread would eventually sync these
 resources into the OVN DB. I would like to know the community's thoughts on
 this.

>>>
>>>
> ​I question whether making operations appear to be successful even when
> ovsdb-server is unreachable is a useful thing.  API calls fail today if the
> Neutron db is unreachable.  Why would we bend over backwards for the OVN
> database?
>
> If this was easy to do, sure, but this solution seems *incredibly* complex
> to me, so I see it as an absolute last resort.​
>
>
>
>> If we can make it work, it would indeed be a huge plus for system wide
>>> upgrades and some corner cases in the code (ACL specifically), where the
>>

Re: [openstack-dev] [tripleo] Modifying just a few values on overcloud redeploy

2016-07-27 Thread Steven Hardy
On Tue, Jul 26, 2016 at 05:23:21PM -0400, Adam Young wrote:
>I worked through how to do a complete clone of the templates to do a
>deploy and change a couple values here:
> 
>http://adam.younglogic.com/2016/06/custom-overcloud-deploys/
> 
>However, all I want to do is to set two config options in Keystone.  Is
>there a simple way to just modify the two values below?  Ideally, just
>making a single env file and passing it via openstack overcloud deploy -e
>somehow.
> 
>'identity/domain_specific_drivers_enabled': value => 'True';
> 
>'identity/domain_configurations_from_database': value => 'True';

Yes, the best way to do this is to pass a hieradata override, as documented
here:

http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_config.html

First step is to look at the puppet module that manages that configuration,
in this case I assume it's puppet-keystone:

https://github.com/openstack/puppet-keystone/tree/master/manifests

Some grepping shows that domain_specific_drivers_enabled is configured
here:

https://github.com/openstack/puppet-keystone/blob/master/manifests/init.pp#L1124..L1155

So working back from those variables, "using_domain_config" and
"domain_config_directory", you'd create a yaml file that looks like:

parameter_defaults:
  ControllerExtraConfig:
    keystone::using_domain_config: true
    keystone::domain_config_directory: /path/to/config

However, it seems that you want to configure domain_specific_drivers_enabled
*without* configuring domain_config_directory, so that it comes from the
database?

In that case, puppet has a "passthrough" interface you can use (this is the
same for all openstack puppet modules AFAIK):

https://github.com/openstack/puppet-keystone/blob/master/manifests/config.pp

The environment file (referred to as controller_extra.yaml below) looks like:

parameter_defaults:
  ControllerExtraConfig:
    keystone::config::keystone_config:
      identity/domain_specific_drivers_enabled:
        value: true
      identity/domain_configurations_from_database:
        value: true

Note the somewhat idiosyncratic syntax: you pass the value via a
"value: foo" map, not directly to the configuration key (don't ask me why!).

Then do openstack overcloud deploy --templates /path/to/templates -e 
controller_extra.yaml

The one gotcha here is if puppet keystone later adds an explicit interface
which conflicts with this, e.g a domain_specific_drivers_enabled variable
in the above referenced init.pp, you will get a duplicate definition error
(because you can't define the same thing twice in the puppet catalog).

This means that long-term use of the generic keystone::config::keystone_config
interface can be fragile, so it's best to add an explicit e.g
keystone::domain_specific_drivers_enabled interface if this is a long-term
requirement.

This is probably something we should add to our docs, I'll look at doing
that.

Hope that helps,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] repo split

2016-07-27 Thread Martin André
On Thu, Jul 21, 2016 at 5:21 PM, Steven Dake (stdake)  wrote:
> I am voting -1 for now, but would likely change my vote after we branch
> Newton.  I'm not a super big fan of votes way ahead of major events (such
> as branching) because a bunch of things could change between now and then
> and the vote would be binding.
>
> Still community called the vote - so vote stands :)

IIUC, if there is a split, it's scheduled for when we branch Newton,
which is only one month away.

I'm +1 on splitting ansible deployment code into kolla-ansible.

Martin

> Regards
> -steve
>
>
> On 7/20/16, 1:48 PM, "Ryan Hallisey"  wrote:
>
>>Hello.
>>
>>The repo split discussion that started at summit was brought up again at
>>the midcycle.
>>The discussion was focused around splitting the Docker containers and
>>Ansible code into
>>two separate repos [1].
>>
>>One of the main arguments against the split is backports.  Backports will need
>>to be done
>>by hand for a few releases.  So far, there hasn't been a ton of
>>backports, but that could
>>always change.
>>
>>As for splitting, it provides a much clearer view of what pieces of the
>>project are where.
>>Kolla-ansible with its own repo will sit alongside kolla-kubernetes as
>>consumers of the
>>kolla repo.
>>
>>The target for the split will be day 1 of Ocata. The core team will
>>vote on
>>the change of splitting kolla into kolla-ansible and kolla.
>>
>>Cores please respond with a +1/-1 to approve or disapprove the repo
>>split. Any community
>>member feel free to weigh in with your opinion.
>>
>>+1
>>-Ryan
>>
>>[1] - https://etherpad.openstack.org/p/kolla-N-midcycle-repo-split
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Gertty 1.4.0

2016-07-27 Thread Michał Dulko
On 07/27/2016 12:39 AM, James E. Blair wrote:
> Announcing Gertty 1.4.0
> ===
>
> Gertty is a console-based interface to the Gerrit Code Review system.
>
> Gertty is designed to support a workflow similar to reading network
> news or mail.  It syncs information from Gerrit to local storage to
> support disconnected operation and easy manipulation of local git
> repos.  It is fast and efficient at dealing with large numbers of
> changes and projects.
>
> The full README may be found here:
>
>   https://git.openstack.org/cgit/openstack/gertty/tree/README.rst
>
> Changes since 1.3.0:
> 
>
> 

Just wondering - have there been attempts to implement syntax highlighting in
the diff view? I think that's the only thing that keeps me from switching to
Gertty.

Thanks,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][vitrage] some compress error when deploy OS

2016-07-27 Thread Afek, Ifat (Nokia - IL)
Hi,

I’m glad that you found a workaround.
Someone from the vitrage-dashboard team should look at the specific error and tell
you what the best solution is.

Best Regards,
Ifat.


From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
Sent: Wednesday, July 27, 2016 9:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [devstack][vitrage] some compress error when 
deploy OS

I've encountered the same error while installing openstack with devstack.

It seems to be caused by an issue in the vitrage dashboard plugin.

The relative link in the dagre-d3 demo pages cannot be resolved by the Django
compressor.



https://github.com/openstack/vitrage-dashboard/blob/master/vitragedashboard/static/vendor/dagre-d3/demo/arrows.html#L8

Currently we can work around it by removing the demo pages from the Python
lib.

$ sudo mv 
/opt/stack/vitrage-dashboard/vitragedashboard/static/vendor/dagre-d3/demo/ 
dagre-d3-demo.backup

Could vitrage and devstack team have a look at it?

--
Yujun Zhang

On Fri, Jul 22, 2016 at 11:04 AM 
dong.wenj...@zte.com.cn wrote:

Hi all,

When I use devstack to deploy an OpenStack environment, it raises the error below.
The log is as follows.
Does anybody know how to resolve this problem? Thank you!

12 static files copied to '/opt/stack/horizon/static', 1708 unmodified.
+lib/horizon:init_horizon:152  
DJANGO_SETTINGS_MODULE=openstack_dashboard.settings
+lib/horizon:init_horizon:152  django-admin compress --force
Found 'compress' tags in:
/opt/stack/horizon/openstack_dashboard/templates/horizon/_scripts.html
/opt/stack/horizon/openstack_dashboard/templates/horizon/_conf.html
/opt/stack/horizon/openstack_dashboard/templates/_stylesheets.html
Compressing... CommandError: An error occurred during rendering 
/opt/stack/horizon/openstack_dashboard/templates/horizon/_scripts.html: 
'\"../build/dagre-d3.js\"' isn't accessible via COMPRESS_URL 
('/dashboard/static/') and can't be compressed
+lib/horizon:init_horizon:1exit_trap
+./stack.sh:exit_trap:480  local r=1


BR,
dwj

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Support for bay rollback may break magnum API backward compatibility

2016-07-27 Thread Wenzhi Yu (yuywz)
Hi folks,

I am working on a patch [1] to add a bay rollback mechanism on update failure.
But it seems to break magnum API
backward compatibility.

I'm not sure how to deal with this; can you please give me your suggestions?
Thanks!

[1]https://review.openstack.org/#/c/343478/

2016-07-27



Best Regards,
Wenzhi Yu (yuywz)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][Heat][Tacker][Murano][App-Catalog] [Glance] How to validate your binary data in OpenStack

2016-07-27 Thread Flavio Percoco

On 25/07/16 20:03 -0400, Nikhil Komawar wrote:

Thanks for your nice message Mikhail.


I, however, wanted to make a small correction to avoid any further
presumptions about Glare, Glance and the Images API based on the tags
in the email subject. (I am also adding the Glance tag to the list to
ensure we reach the appropriate Images-related audience.)


Allow me to further nitpick on the tags. It'd be better to just use Glance as the
tag for these emails and make Glare just part of the rest of the subject. This
distinction is important as these tags are used to filter emails and Glare is
not really an independent project but part of Glance.

(sorry for the nitpick but I thought I'd take this chance to share it)
Keep it up, folks.
Flavio



On 7/25/16 2:26 PM, Mikhail Fedosin wrote:

Hello! Today I want to discuss with the community one good feature in
Glare - artifact validation. In short, Glare allows validating binary
data before it's uploaded to the store. For example, for TOSCA we're able
to check whether an uploaded YAML file is a valid template [1]; for VM images we
can test their integrity. Glare also supports quite sophisticated
workflows, like sending Murano packages to an external CI

While this feature looks nothing less than excellent, it is unfortunate
that the Images API is built into Glance -- meaning Glance will remain
the reference implementation of the OpenStack Images API for the near to
long future. So, while it may be possible to test integrity of data
assets, it won't be possible for any operator/user/API-consumer to use
Glare for Images as Glance will remain the whole and sole API for Images
and all future features need to be implemented therein.

The reasons for this have been discussed briefly in the proposal
(review) of the Glare spec and in related conversations. If anyone needs
more info, please reach out.


or validate Heat templates with given environments.

So, I want to think through exactly what validation is required from Glare
and how we can help related projects succeed by checking and reliably
storing their binary assets.

Best regards,
Mikhail Fedosin

[1]
https://review.openstack.org/#/c/337633/10/contrib/glare/openstack_app_catalog/artifacts.py@159







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [UI] Version

2016-07-27 Thread Honza Pokorny
Hello folks,

As the tripleo-ui project is quickly maturing, it might be time to start
versioning our code.  As of now, the version is set to 0.0.1 and that
hardly reflects the state of the project.

What do you think?

Honza Pokorny

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Heat][Horizon] Glance v2 and custom locations

2016-07-27 Thread Flavio Percoco

On 26/07/16 15:45 +, Fox, Kevin M wrote:

The app catalog has suffered from this change too. We had to force v1 in our
suggested download CLI lines to make them work when newer clients defaulted to v2
and the previously working command-line switches suddenly vanished.

As I understand it, v2 had a solution for this, but that solution too was deprecated.
I've heard rumor of a new suggested way of doing it, but I haven't been able to
find it, so I guess it's still cooking.

I'd ask the Glance team not to deprecate v1 until this issue is resolved, as it
is a very common use case for Glance. I understand the desire to slough off the
old and only support a single, new API. But the new API has a big gap in it
that needs to be fixed first.


To be honest, I don't think this is a blocker for the v1 deprecation. The v1
deprecation has been postponed long enough and I don't think this is a real
blocker. There is indeed a way to allow for services to add locations, which is
enabling it in the config file. I don't mean to oversimplify the work of
changing config files and the cost this has from an operations perspective but
it's not blocking other projects. This option can be set to True in the gate if
necessary.

That being said, I do think we can do better here and that the current state is
not ideal for everyone. I've attempted several times to change this situation but
I believe it was not the right time *shrugs*

Flavio


Thanks,
Kevin

From: Mikhail Fedosin [mfedo...@mirantis.com]
Sent: Tuesday, July 26, 2016 4:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Glance][Heat][Horizon] Glance v2 and custom locations

Hello!

As you may know, glance v1 is going to be deprecated in the Newton cycle. Almost all
projects support glance v2 at this moment; Nova uses it by default. The only
thing that blocks us from complete adoption is the possibility to set custom
locations on images. In v1 any user can set a location on his image, but in v2
this functionality is not allowed by default, which prevents v2 adoption in
services like Horizon or Heat.

It all happens because of differences between v1 and v2 locations. In v1 it is
pretty easy - the user specifies a URL and sends a request, and glance adds this URL to
the image and activates it.
In v2 things are more complicated: v2 supports multiple locations per image,
which means that when a user wants to download an image file, glance will choose the
best one from the list of locations. It leads to some inconsistencies: a user can
add or delete locations from his image even if it is active.
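
To make the difference concrete, here is a rough sketch with python-glanceclient
as I understand the two APIs (the endpoint, token and URLs below are placeholders):

from glanceclient import Client

endpoint = 'http://controller:9292'   # placeholder
token = '...'                         # placeholder; normally obtained via keystoneauth
glance_v1 = Client('1', endpoint=endpoint, token=token)
glance_v2 = Client('2', endpoint=endpoint, token=token)

url = 'http://example.com/images/cirros.qcow2'

# v1: the location is simply passed when the image record is created, and
# the image becomes active without uploading any data through glance.
glance_v1.images.create(name='cirros', disk_format='qcow2',
                        container_format='bare', location=url)

# v2: the image is created first, then its list of locations is managed
# separately, and only if the deployment allows it
# (show_multiple_locations / policy).
image = glance_v2.images.create(name='cirros', disk_format='qcow2',
                                container_format='bare')
glance_v2.images.add_location(image.id, url, {})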

To enable adding custom locations, the operator has to set the config option
'show_multiple_locations' to True. After that any user will be able to add or remove
his image locations, update location metadata, and finally see the locations of
all images even if they were uploaded to local storage. All these things are not
desired if glance v2 has a public interface, because it exposes the inner cloud
architecture. It leads to the fact that Heat, Horizon, Nova in some cases,
and other services that used to set custom locations in glance v1 won't be able
to adopt glance v2. Unfortunately, removing this behavior in v2 isn't easy,
because it requires serious architecture changes and breaks the API. Moreover, many
vendors use these features in their clouds for private glance deployments and
they really won't like it if we break anything.

So, I want to hear opinions from the Glance community and other involved people.

Best regards,
Mikhail Fedosin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Heat][Horizon] Glance v2 and custom locations

2016-07-27 Thread Flavio Percoco

On 26/07/16 14:32 +0300, Mikhail Fedosin wrote:

Hello!

As you may know, glance v1 is going to be deprecated in the Newton cycle. Almost
all projects support glance v2 at this moment; Nova uses it by default.
The only thing that blocks us from complete adoption is the possibility to
set custom locations on images. In v1 any user can set a location on his
image, but in v2 this functionality is not allowed by default, which
prevents v2 adoption in services like Horizon or Heat.

It all happens because of differences between v1 and v2 locations. In v1 it
is pretty easy - the user specifies a URL and sends a request, and glance adds
this URL to the image and activates it.
In v2 things are more complicated: v2 supports multiple locations per
image, which means that when a user wants to download an image file, glance
will choose the best one from the list of locations. It leads to some
inconsistencies: a user can add or delete locations from his image even if it
is active.

To enable adding custom locations, the operator has to set the config option
'show_multiple_locations' to True. After that any user will be able to add or
remove his image locations, update location metadata, and finally see the
locations of all images even if they were uploaded to local storage. All
these things are not desired if glance v2 has a public interface, because it
exposes the inner cloud architecture. It leads to the fact that Heat,
Horizon, Nova in some cases, and other services that used to set custom
locations in glance v1 won't be able to adopt glance v2. Unfortunately,
removing this behavior in v2 isn't easy, because it requires serious
architecture changes and breaks the API. Moreover, many vendors use these
features in their clouds for private glance deployments and they really
won't like it if we break anything.

So, I want to hear opinions from the Glance community and other involved people.


I agree the current situation is not ideal but I don't think there's a perfect
solution that will let other services magically use the location's
implementation in v2. The API itself is different and it requires a different
call.

With that in mind, I think the right thing to do here is to get rid of that
option[0] and let operators manage this through poilicies. This does not mean
the policies available are perfect.

I'm not an expert on service tokens but I think we said that we could probably
just use service tokens to allow for this feature to be used by other services
instead of keeping it wide open everywhere.

While I don't think the current situation is ideal, I think it's better than
keeping it wide open.

Hope the above helps,
Flavio

[0] https://review.openstack.org/#/c/313936/



Best regards,
Mikhail Fedosin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Michael Still
On Tue, Jul 26, 2016 at 4:44 PM, Fox, Kevin M  wrote:

[snip]

The issue is, as I see it, a parallel activity to one of the projects that is
> currently accepted into the Big Tent, aka Containerized Deployment


[snip]

This seems to be the crux of the matter as best as I can tell. Is it true
to say that the concern is that Kolla believes they "own" the containerized
deployment space inside the Big Tent?

Whether to have competing projects in the big tent was debated by the TC at
the time and my recollection is that we decided that was a good thing -- if
someone wanted to develop a Nova replacement, then let them do it in public
with the community. It would either win or lose based on its merits. Why is
this not something which can happen here as well?

I guess I should also point out that there is at least one other big tent
deployment tool deploying containerized openstack components now, so it's
not like this idea is unique or new. Perhaps using kubernetes makes it
different somehow, but I don't see it.

Michael




-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Independent tag and stable branches

2016-07-27 Thread Julien Danjou
On Tue, Jul 26 2016, Doug Hellmann wrote:

> We have not yet automated the process of creating stable branches. When
> we do, it's likely to first apply to the cycle-based release models,
> since handling those branches is clearer.
>
> If you don't have permission to create the branch yourself, drop by
> #openstack-release and I'll help you with it.

Totally makes sense! Thanks Doug and Tony, that's perfect. :)

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Independent tag and stable branches

2016-07-27 Thread Julien Danjou
On Tue, Jul 26 2016, Jeremy Stanley wrote:

> You already have a number of existing stable/x.y branches so I'm
> curious how this has worked for you in practice up to this point.
> Skimming those branches I think you've simply gotten lucky and
> haven't run into the issue yet because most have received no
> backports at all and the handful of changes that have been
> backported are usually within the first few days to a month after
> branching while still fairly close to the master branch state (after
> which point those branches have gone stagnant).

If I understand correctly, I think this is what we do to solve what you
describe:

  
https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/gnocchi.yaml#n25

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Steven Dake (stdake)
Michael,

Response inline.

From: Michael Still <mi...@stillhq.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Wednesday, July 27, 2016 at 5:30 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting 
Fuel CCP (docker/k8s) kicked off

On Tue, Jul 26, 2016 at 4:44 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:

[snip]

The issue is, as I see it, a parallel activity to one of the projects that is currently
accepted into the Big Tent, aka Containerized Deployment

[snip]

This seems to be the crux of the matter as best as I can tell. Is it true to 
say that the concern is that Kolla believes they "own" the containerized 
deployment space inside the Big Tent?

I can't give you Kevin's thinking on this, but my thinking is that every
project has a right to innovate even if it means competing with an established
project.  Even if that competition involves a straight-up fork or serious copy
and paste from the competing project.  These are permitted things in the big
tent.  Kolla has been forked a few times by people seeding competing
projects.  The license permits this, and fwiw I don't see any problem with it.
There is nothing more appealing to an engineer than forking a code base for
whatever reason.  Hence I disagree with your assertion that competition is the
crux of the matter.

It is easier to copy a successful design than to innovate your own the hard way.

I have already stated where the problem is, and I'll state it once again using 
C&P:

"
Given the strong language around partnership between Intel, Mirantis, and
Google in that press release, and the activity in the review queue (2
pages of outstanding reviews) it seems clear to me that the intent is for
this part of Fuel to participate in the big tent.  The right thing to do
here is for fuel-ccp to submit their repos to TC oversight by adding them
to the official project list.

Fuel requires a mission change, or it may be perceived that Fuel itself
does not adhere to the Four Opens [1] specifically Open Development and
Open Community.
"

[snip]


Michael




--
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] tripleo-common bugs, bug tracking and launchpad tags

2016-07-27 Thread Dougal Matthews
On 19 July 2016 at 16:20, Steven Hardy  wrote:

> On Mon, Jul 18, 2016 at 12:28:10PM +0100, Julie Pichon wrote:
> > Hi,
> >
> > On Friday Dougal mentioned on IRC that he hadn't realised there was a
> > separate project for tripleo-common bugs on Launchpad [1] and that he'd
> > been using the TripleO main tracker [2] instead.
> >
> > Since the TripleO tracker is also used for client bugs (as far as I can
> > tell?), and there doesn't seem to be a huge amount of tripleo-common
> > bugs perhaps it would make sense to also track those in the main
> > tracker? If there is a previous conversation or document about bug
> > triaging beyond [3] I apologise for missing it (and would love a
> > URL!). At the moment it's a bit confusing.
>
> Thanks for raising this, yes there is a bit of a proliferation of LP
> projects, but FWIW the only one I'm using to track coordinated milestone
> releases for Newton is this one:
>
> https://launchpad.net/tripleo/
>
> > If we do encourage using the same bug tracker for multiple components,
> > I think it would be useful to curate a list of official tags [4]. The
> > main advantage of doing that is that the tags will auto-complete so
> > it'd be easier to keep them consistent (and thus actually useful).
>
> +1 I'm fine with adding tags, but I would prefer that we stopped adding
> more LP projects unless the associated repos aren't planned to be part of
> the coordinated release (e.g I don't have to track them ;)
>
> > Personally, I wanted to look through open bugs against
> > python-tripleoclient but people use different ways of marking them at
> > the moment - e.g. [tripleoclient] or [python-tripleoclient] or
> > tripleoclient (or nothing?) in the bug name. I tried my luck at adding
> > a 'tripleoclient' tag [5] to the obvious ones as an example. Maybe
> > something shorter like 'cli', 'common' would make more sense. If there
> > are other tags that come back regularly it'd probably be helpful to
> > list them explicitly as well.
>
> Sure, well I know that many python-*clients do have separate LP projects,
> but in the case of TripleO our client is quite highly coupled to the the
> other TripleO pieces, in particular tripleo-common.  So my vote is to
> create some tags in the main tripleo project and use that to filter bugs as
> needed.
>
> There are two projects we might consider removing, tripleo-common, which
> looks pretty much unused and tripleo-validations which was recently added
> by the sub-team working on validations.
>

I agree with retiring these, and I'd also like to add tripleo-workflows to
the list for consideration; it has been created but hasn't yet been used as
far as I can tell.

Sorry for the late reply. I'm glad this was brought up; it was on my mental
todo list. It should make things clearer internally and also for users less
familiar with the project who want to report bugs.


If folks find either useful then they can stay, but it's going to be easier
> to get a clear view on when to cut a release if we track everything
> considered part of the tripleo deliverable in one place IMHO.
>
> Thanks,
>
> Steve
>
> >
> > Julie
> >
> > [1] https://bugs.launchpad.net/tripleo-common
> > [2] https://bugs.launchpad.net/tripleo
> > [3] https://wiki.openstack.org/wiki/TripleO#Bug_Triage
> > [4] https://wiki.openstack.org/wiki/Bug_Tags
> > [5] https://bugs.launchpad.net/tripleo?field.tag=tripleoclient
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Steve Hardy
> Red Hat Engineering, Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [UI] Version

2016-07-27 Thread Dougal Matthews
On 27 July 2016 at 12:41, Honza Pokorny  wrote:

> Hello folks,
>
> As the tripleo-ui project is quickly maturing, it might be time to start
> versioning our code.  As of now, the version is set to 0.0.1 and that
> hardly reflects the state of the project.
>
> What do you think?
>

Yup, sounds good to me! I would suggest that we make the Newton
release 1.0 and then continue from there. I am not sure what the
normal pattern is, though.


Honza Pokorny
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [UI] Version

2016-07-27 Thread Steven Hardy
On Wed, Jul 27, 2016 at 08:41:32AM -0300, Honza Pokorny wrote:
> Hello folks,
> 
> As the tripleo-ui project is quickly maturing, it might be time to start
> versioning our code.  As of now, the version is set to 0.0.1 and that
> hardly reflects the state of the project.
> 
> What do you think?

I would like to see it released as part of the coordinated tripleo release,
e.g tagged each milestone along with all other projects where we assert the
release:cycle-with-intermediary tag:

https://github.com/openstack/governance/blob/master/reference/projects.yaml#L4448

Because tripleo-ui isn't yet fully integrated with TripleO (e.g packaging,
undercloud installation and CI testing), we've not tagged it in the last
two milestone releases, but perhaps we can for the n-3 release?

https://review.openstack.org/#/c/324489/

https://review.openstack.org/#/c/340350/

When we do that, the versioning will align with all other TripleO
deliverables, solving the problem of the 0.0.1 version?

The steps to achieve this are:

1. Get per-commit builds of tripleo-ui working via delorean-current:

https://trunk.rdoproject.org/centos7-master/current/

2. Get the tripleo-ui package installed and configured as part of the
undercloud install (via puppet) - we might want to add a conditional to the
undercloud.conf so it's configurable (enabled by default?)

https://github.com/openstack/instack-undercloud/blob/master/elements/puppet-stack-config/puppet-stack-config.pp

3. Get the remaining Mistral API pieces landed so it's fully functional

4. Implement some basic CI smoke tests to ensure the UI is at least
accessible.

Does that sequence make sense, or have I missed something?

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [UI] Version

2016-07-27 Thread Steven Hardy
On Wed, Jul 27, 2016 at 02:08:08PM +0100, Dougal Matthews wrote:
>On 27 July 2016 at 12:41, Honza Pokorny  wrote:
> 
>  Hello folks,
> 
>  As the tripleo-ui project is quickly maturing, it might be time to start
>  versioning our code.  As of now, the version is set to 0.0.1 and that
>  hardly reflects the state of the project.
> 
>  What do you think?
> 
>Yup, Sounds good to me! I would suggest that we make the Newton
>release 1.0 and then continue from there. I am not sure what the
>normal pattern is tho'

No, please don't invent an independent versioning scheme.  tripleo UI
should be a tripleo deliverable, and part of the coordinated release (see
my reply to Honza).

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Setting kernel args to overcloud nodes

2016-07-27 Thread Saravanan KR
Hello,

We are working on SR-IOV & DPDK TripleO integration, which requires
setting the kernel args for huge pages, IOMMU and CPU isolation.
Earlier we were working on setting the kernel args via IPA [1], the
reasons being:
1. IPA is installing the boot loader on the overcloud node
2. Ironic knows the hardware spec, using which, we can target specific
args to nodes via introspection rules

As the proposal is to change the image-owned file '/etc/default/grub',
the ironic team has suggested using the instance user data to set the
kernel args [2][3] instead of IPA. In the suggested approach, we plan
to update /etc/default/grub, regenerate /etc/grub2.cfg and then issue
a reboot. The reboot is mandatory because os-net-config will configure
the DPDK bridges and ports by binding the DPDK driver, which requires
the kernel args for IOMMU and huge pages to already be set.
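
To make the idea concrete, here is a minimal first-boot sketch of the
kind of update described above (Python purely for illustration; the
exact kernel arguments, hugepage/isolcpus values and the grub2-mkconfig
output path are assumptions that would come from the deployment, not
fixed choices):

#!/usr/bin/env python
# Illustrative first-boot script: append kernel args, regenerate the
# grub config and reboot.  EXTRA_ARGS below is only an example value.
import re
import subprocess

EXTRA_ARGS = ("intel_iommu=on iommu=pt default_hugepagesz=1G "
              "hugepagesz=1G hugepages=16 isolcpus=2-7")
GRUB_DEFAULT = "/etc/default/grub"

with open(GRUB_DEFAULT) as f:
    content = f.read()

def _append(match):
    # Insert the extra args just before the closing quote of
    # GRUB_CMDLINE_LINUX="...".
    return '%s %s"' % (match.group(0)[:-1], EXTRA_ARGS)

content = re.sub(r'^GRUB_CMDLINE_LINUX=".*"$', _append, content, flags=re.M)

with open(GRUB_DEFAULT, "w") as f:
    f.write(content)

# Regenerate the grub2 config and reboot, so that os-net-config finds
# IOMMU and hugepages already enabled when it binds the DPDK driver.
subprocess.check_call(["grub2-mkconfig", "-o", "/etc/grub2.cfg"])
subprocess.check_call(["reboot"])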

As discussed in the TripleO IRC meeting, we need to ensure that the
user data updating the kernel args does not overlap with any other
puppet configuration. Please let us know if you have any comments on
this approach.

Regards,
Saravanan KR

[1] https://review.openstack.org/#/c/331564/
[2] 
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#appending-kernel-parameters-to-boot-instances
[3] 
http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html#firstboot-extra-configuration

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] mascot/logo ideas

2016-07-27 Thread Emilien Macchi
Looking at the poll we have 4 votes for the wolf, I guess we can go
with it. Tardigrade loses by 2 votes.

If anyone is against this vote please raise your hand now :-)

On Tue, Jul 26, 2016 at 4:26 PM, Emilien Macchi  wrote:
> We have 6 votes in total, results are:
>
> 2 votes for wolf.
> 2 votes for  tardigrade - https://en.wikipedia.org/wiki/Tardigrade
> 1 vote for axolotl - https://en.wikipedia.org/wiki/Axolotl
> 1 vote for dog puppet:
> https://img1.etsystatic.com/000/0/5613081/il_fullxfull.241000707.jpg
>
> Sounds like we haven't reached a consensus and failed to get more
> votes... I would propose to either report our vote or cancel and
> choose no mascot.
> Thoughts?
>
> On Mon, Jul 25, 2016 at 10:27 PM, Emilien Macchi  wrote:
>> Hi,
>>
>> So we have until July 27th to take the decision about our mascot.
>> If you are interested to vote, please add +1 on the proposals on the
>> etherpad [1].
>>
>> By Wednesday, we'll take the one with the most of +1
>>
>> Thanks,
>>
>> [1] https://etherpad.openstack.org/p/puppet-openstack-mascot-logo
>>
>> On Tue, Jul 12, 2016 at 11:23 AM, Emilien Macchi  wrote:
>>> Hey,
>>>
>>> During the meeting we decided to use etherpad to submit new ideas for
>>> our mascot / logo [1]:
>>> https://etherpad.openstack.org/p/puppet-openstack-mascot-logo
>>>
>>> Feel free to use your imagination as long you stay SFW :-)
>>>
>>> Thanks,
>>>
>>> [1] http://osdir.com/ml/openstack-dev/2016-07/msg00456.html
>>> --
>>> Emilien Macchi
>>
>>
>>
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Support for bay rollback may break magnum API backward compatibility

2016-07-27 Thread Ton Ngo

Hi Wenzhi,
 Looks like you are adding the new --rollback option to bay-update.  If
the user does not specify this new option,
then bay-update behaves the same as before;  in other words, if it fails,
then the state of the bay will be left
in the partially updated mode.  Is this correct?  If so, this does change
the API, but does not seem to break
backward compatibility.
Ton Ngo,



From:   "Wenzhi Yu (yuywz)" 
To: "openstack-dev" 
Date:   07/27/2016 04:13 AM
Subject:[openstack-dev] [magnum] Support for bay rollback may break
magnum  API backward compatibility



 Hi folks,

 I am working on a patch [1] to add a bay rollback mechanism on update
 failure. But it seems to break magnum API
 backward compatibility.

 I'm not sure how to deal with this, can you please give me your
 suggestion? Thanks!

 [1]https://review.openstack.org/#/c/343478/

 2016-07-27

 Best Regards,
 Wenzhi Yu (yuywz)
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [UI] Version

2016-07-27 Thread Honza Pokorny

On 2016-07-27 14:18, Steven Hardy wrote:
> On Wed, Jul 27, 2016 at 08:41:32AM -0300, Honza Pokorny wrote:
> > Hello folks,
> > 
> > As the tripleo-ui project is quickly maturing, it might be time to start
> > versioning our code.  As of now, the version is set to 0.0.1 and that
> > hardly reflects the state of the project.
> > 
> > What do you think?
> 
> I would like to see it released as part of the coordinated tripleo release,
> e.g tagged each milestone along with all other projects where we assert the
> release:cycle-with-intermediary tag:
> 
> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L4448
> 
> Because tripleo-ui isn't yet fully integrated with TripleO (e.g packaging,
> undercloud installation and CI testing), we've not tagged it in the last
> two milestone releases, but perhaps we can for the n-3 release?
> 
> https://review.openstack.org/#/c/324489/
> 
> https://review.openstack.org/#/c/340350/
> 
> When we do that, the versioning will align with all other TripleO
> deliverables, solving the problem of the 0.0.1 version?

Yes, this sounds great.

> 
> The steps to achieve this are:
> 
> 1. Get per-commit builds of tripleo-ui working via delorean-current:
> 
> https://trunk.rdoproject.org/centos7-master/current/

Patch for per-commit builds:

https://review.openstack.org/#/c/343834/

> 2. Get the tripleo-ui package installed and configured as part of the
> undercloud install (via puppet) - we might want to add a conditional to the
> undercloud.conf so it's configurable (enabled by default?)
> 
> https://github.com/openstack/instack-undercloud/blob/master/elements/puppet-stack-config/puppet-stack-config.pp
> 
> 3. Get the remaining Mistral API pieces landed so it's fully functional
> 
> 4. Implement some basic CI smoke tests to ensure the UI is at least
> accessible.
> 
> Does that sequence make sense, or have I missed something?

This makes sense to me.

> 
> Steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptl][requirements] nomination period started

2016-07-27 Thread Matthew Thode
We've started a period of self nomination in preparation for the
requirements project fully moving into being its own project (as it's
still under Doug Hellmann).

We are gathering the self nominations here before we vote next week.
https://etherpad.openstack.org/p/requirements-ptl-newton

Nominees should also send an email to the openstack-dev list.

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][requirements] nomination period started

2016-07-27 Thread Matthew Thode
On 07/27/2016 08:41 AM, Matthew Thode wrote:
> We've started a period of self nomination in preparation for the
> requirements project fully moving into project (as it's still under Doug
> Hellmann).
> 
> We are gathering the self nominations here before we vote next week.
> https://etherpad.openstack.org/p/requirements-ptl-newton
> 
> Nominees should also send an email to the openstack-dev list.
> 

And here's my self nomination email

I originally joined as a packager, as we use requirements for some
dependency definitions.  I have since moved on to work on making
requirements changes get verified against outside projects so as to
not break things once merged (better testing). Other than that, I'm
just keeping up on the queue :D

If you have any questions or concerns feel free to ask.

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][requirements] nomination period started

2016-07-27 Thread Davanum Srinivas
w00t!! Glad to see this happen.

-- Dims

On Wed, Jul 27, 2016 at 9:41 AM, Matthew Thode
 wrote:
> We've started a period of self nomination in preparation for the
> requirements project fully moving into project (as it's still under Doug
> Hellmann).
>
> We are gathering the self nominations here before we vote next week.
> https://etherpad.openstack.org/p/requirements-ptl-newton
>
> Nominees should also send an email to the openstack-dev list.
>
> --
> -- Matthew Thode (prometheanfire)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo-test-cloud-rh2 local mirror server

2016-07-27 Thread Derek Higgins
On 21 July 2016 at 23:04, Paul Belanger  wrote:
> Greetings,
>
> I write today to see how I can remove this server from 
> tripleo-test-cloud-rh2. I
> have an open patch[1] currently to migrate tripleo-ci to use our AFS mirrors 
> for
> centos and epel.  However, I'm still struggling to see what else you are using
> the local mirror for.
>
> From what I see, there appears to be some puppet modules in the mirror?
>
> The reason I am doing this work, is to help bring tripleo inline with
> openstack-infra tooling.  There shouldn't be the need for a project to 
> maintain
> its own infrastructure outside of openstack-infra.  If so, I see that as some
> sort of a failure between the project and openstack-infra.   And with that in
> mind, I am here to help fix that.
>
> For the most part, I think we have everything currently in place to migrate 
> away
> from your locally mirror. I just need some help figuring what else is left and
> then delete it.

Hi Paul,
The mirror server hosts 3 sets of data used in CI, along with a cron
job aimed at promoting trunk repositories.
The first you've already mentioned: there is a list of puppet modules
hosted here. We soon hope to move to packaged puppet modules, so the
need for this will go away.

The second is a mirror of the centos cloud images; these are updated
hourly by the centos-cloud-images cronjob[1]. I guess these could be
easily replaced with the AFS server.

Then we come to the parts where it will probably be more tricky to
move away from our own server

o cached images - our nightly periodic jobs run tripleo ci with
master/HEAD for all openstack projects (using the most recent rdo
trunk repository). If the jobs pass, then we upload the overcloud-full
and ipa images to the mirror server, along with logging which jobs
passed. This happens at the end of toci_instack.sh[2]. Nothing else
happens at this point; the files are just uploaded and nothing starts
using them yet.

o promote script - hourly we then run the promote script[3]. This
script is what's responsible for the promotion of the master rdo
repository that is used by tripleo ci (and devs). It checks to see if
images have been uploaded to the mirror server by the periodic jobs,
and if all of the jobs we care about (currently
periodic-tripleo-ci-centos-7-ovb-ha and
periodic-tripleo-ci-centos-7-ovb-nonha[4]) passed, then it does 2
things:
  1. updates the current-tripleo link on the mirror server[5]
  2. updates the current-tripleo link on the rdo trunk server[6]
By doing this we ensure that the current-tripleo link on the rdo
trunk server is always pointing to something that has passed the
tripleo ci jobs, and that tripleo ci is using cached images that were
built from this repository.

We've had to run this promote script on the mirror server because the
individual jobs run independently, and in order to make the promote
decision we needed somewhere that is aware of the status of all the
jobs.
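
For anyone unfamiliar with the flow, here is a rough sketch of the kind
of decision the promote script makes (Python for illustration only; the
job names are the real ones listed above, but the status files, link
paths and update mechanics are invented stand-ins, not the actual
promote.sh logic):

import os

REQUIRED_JOBS = ["periodic-tripleo-ci-centos-7-ovb-ha",
                 "periodic-tripleo-ci-centos-7-ovb-nonha"]

def all_jobs_passed(results_dir, delorean_hash):
    # Each periodic job is assumed to drop a "<job>-<hash>.passed" marker
    # next to the images it uploaded; promote only if every job did.
    return all(os.path.exists(os.path.join(
                   results_dir, "%s-%s.passed" % (job, delorean_hash)))
               for job in REQUIRED_JOBS)

def promote(results_dir, delorean_hash, link="current-tripleo"):
    if not all_jobs_passed(results_dir, delorean_hash):
        return False
    # Point the current-tripleo link at the newly validated build; the
    # real script also updates the link on the rdo trunk server.
    target = os.path.join(results_dir, delorean_hash)
    tmp_link = link + ".new"
    os.symlink(target, tmp_link)
    os.rename(tmp_link, link)   # atomic replace of the old link
    return True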

Hope this answers your questions,
Derek.

[1] - 
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/scripts/mirror-server/mirror-server.pp#n40
[2] - 
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_instack.sh#n198
[3] - 
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/scripts/mirror-server/promote.sh
[4] - 
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/scripts/mirror-server/mirror-server.pp#n51
[5] - http://8.43.87.241/builds/current-tripleo/
[6] - 
http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/

>
> [1] https://review.openstack.org/#/c/326143/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] Project mascot

2016-07-27 Thread Fawaz Mohammed
I suggest horse, fast animal, faster deployment :) .

On Wed, Jul 20, 2016 at 10:39 PM, Billy Olsen  wrote:

> I like the idea of the Kraken...
>
> though I think I like the giant squid over an octopus, but either one
> is in the same vein :-)
>
> On Mon, Jul 18, 2016 at 1:27 AM, James Page  wrote:
> > Hi All
> >
> > As an approved project, we need to provide some ideas for a project
> mascot
> > for the Charms project (see [0]).
> >
> > Some suggestions as discussed on IRC:
> >
> > 1) cobra ('[snake] charming openstack') - which aligns with the Juju
> logo a
> > little.
> > 2) kraken ('many armed animal managing openstack') - but I think that
> falls
> > into mythical creatures so its probably excluded so maybe octopus
> instead?
> >
> > It would be nice to have one or two more ideas - any suggestions?
> >
> > Cheers
> >
> > James
> >
> > [0] http://www.openstack.org/project-mascots
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Capacity table

2016-07-27 Thread Vitaly Kramskikh
If "new design" is the design you've proposed in the original email, then
no - we'll have to reopen the bug and then revert the change again.

2016-07-27 11:36 GMT+03:00 Dmitry Dmitriev :

> Hello Vitaly,
>
> Thank you for this answer.
> The main question here is the business logic.
> Do we have to use new design or don’t.
>
> With best regards, Dmitry
>
> On 26 Jul 2016, at 17:47, Vitaly Kramskikh 
> wrote:
>
> Hi, Dmitry,
>
> Your design seems to be similar to one of our attempts to fix this bug:
> https://review.openstack.org/#/c/280737/. Though this fix was reverted,
> because it led to the bug with a higher priority:
> https://bugs.launchpad.net/fuel/+bug/1556909. So your proposed design
> would lead to reopening of this bug.
>
> 2016-07-19 11:06 GMT+03:00 Dmitry Dmitriev :
>
>> Hello All!
>>
>> We have a very old bug about the Capacity table on the Dashboard tab of
>> environment in Fuel:
>>
>> https://bugs.launchpad.net/fuel/+bug/1375750
>>
>> Current design:
>>
>> https://drive.google.com/open?id=0Bxi_JFs365mBNy1WT0xQT253SWc
>>
>> It shows the full capacity (CPU/Memory/HDD) of all discovered by Fuel
>> nodes.
>>
>> New design:
>>
>> https://drive.google.com/open?id=0Bxi_JFs365mBaWZ0cUtla3N6aEU
>>
>> It contains compute node CPU/Memory capacity and Ceph disk capacity only.
>>
>> New design pros:
>> - cloud administrator can easily estimate all available resources for
>> cloud instances
>>
>> New design cons:
>> - if cloud doesn’t use Ceph then HDD value is zero
>>
>> What do you think about the new design?
>>
>> With best regards, Dmitry
>>
>>
>
>
> --
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.
>
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][requirements] nomination period started

2016-07-27 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-07-27 08:41:17 -0500:
> We've started a period of self nomination in preparation for the
> requirements project fully moving into project (as it's still under Doug
> Hellmann).
> 
> We are gathering the self nominations here before we vote next week.
> https://etherpad.openstack.org/p/requirements-ptl-newton
> 
> Nominees should also send an email to the openstack-dev list.
> 

Thanks for kicking this off, Matt! I'm looking forward to seeing this
team fully self-sufficient.

From the etherpad, I see a deadline set of August 5. Is that for
nominations, or the election to have an outcome?

For the record, Anita has agreed to be the primary election official,
and I will assist her.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-ovn][networking-odl] Syncing neutron DB and OVN DB

2016-07-27 Thread Russell Bryant
On Wed, Jul 27, 2016 at 5:58 AM, Kevin Benton  wrote:

> > I'd like to see if we can solve the problems more generally.
>
> We've tried before but we very quickly run into competing requirements
> with regards to eventual consistency. For example, asynchronous background
> sync doesn't work if someone wants their backend to confirm that port
> details are acceptable (e.g. mac isn't in use by some other system outside
> of openstack). Then each backend has different methods for detecting what
> is out of sync (e.g. config numbers, hashes, or just full syncs on startup)
> that each come with their own requirements for how much data needs to be
> resent when an inconsistency is detected.
>
> If we can come to some common ground of what is required by all of them,
> then I would love to get some of this built into the ML2 framework.
> However, we've discussed this at meetups/mid-cycles/summits and it
> inevitably ends up with two people drawing furiously on a whiteboard,
> someone crying in the corner, and everyone else arguing about the lack of
> parametric polymorphism in Go.
>

Ha, yes, makes sense that this is really hard to solve in a way that works
for everyone ...


> Even between OVN and ODL in this thread, it sounds like the only thing in
> common is a background worker that consumes from a queue of tasks in the
> db. Maybe realistically the only common thing we can come up with is a
> taskflow queue stored in the DB to solve the multiple workers issue...
>

To clarify, ODL has this background worker and the discussion was whether
OVN should try to follow a similar approach.

So far, my gut feeling is that it's far too complicated for the problems it
would solve.  There's one identified multiple-worker related race condition
on updates, but I think we can solve that another way.
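
For readers following along, a toy sketch of the journal idea under
discussion might look like the following (Python, using sqlite purely so
the example runs standalone; the table layout and the apply_to_backend()
call are invented for illustration and are not the networking-odl or OVN
code):

import sqlite3

def init(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS journal (
                        id INTEGER PRIMARY KEY AUTOINCREMENT,
                        operation TEXT, resource TEXT, state TEXT)""")

def record(conn, operation, resource):
    # Called from the API worker, ideally in the same transaction as the
    # neutron DB change so the journal row commits (or rolls back) with it.
    conn.execute("INSERT INTO journal (operation, resource, state) "
                 "VALUES (?, ?, 'pending')", (operation, resource))
    conn.commit()

def apply_to_backend(operation, resource):
    # Stand-in for the call to the backend (REST to ODL, OVSDB txn to OVN).
    print("sync %s %s" % (operation, resource))

def drain(conn):
    # The background journal thread: replay pending rows in order.
    rows = conn.execute("SELECT id, operation, resource FROM journal "
                        "WHERE state = 'pending' ORDER BY id").fetchall()
    for row_id, operation, resource in rows:
        apply_to_backend(operation, resource)
        conn.execute("UPDATE journal SET state = 'completed' WHERE id = ?",
                     (row_id,))
    conn.commit()

conn = sqlite3.connect(":memory:")
init(conn)
record(conn, "create", "port-1")
record(conn, "update", "port-1")
drain(conn)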



> On Tue, Jul 26, 2016 at 11:31 AM, Russell Bryant 
> wrote:
>
>>
>>
>> On Fri, Jul 22, 2016 at 7:51 AM, Numan Siddique 
>> wrote:
>>
>>> Thanks for the comments Amitabha.
>>> Please see comments inline
>>>
>>> On Fri, Jul 22, 2016 at 5:50 AM, Amitabha Biswas 
>>> wrote:
>>>
 Hi Numan,

 Thanks for the proposal. We have also been thinking about this use-case.

 If I’m reading this accurately (and I may not be), it seems that the
 proposal is to not have any OVN NB (CUD) operations (R operations outside
 the scope) done by the api_worker threads but rather by a new journal
 thread.


>>> Correct.
>>> ​
>>>
>>>
 If this is indeed the case, I’d like to consider the scenario when
 there any N neutron nodes, each node with M worker threads. The journal
 thread at the each node contain list of pending operations. Could there be
 (sequence) dependency in the pending operations amongst each the journal
 threads in the nodes that prevents them from getting applied (for e.g.
 Logical_Router_Port and Logical_Switch_Port inter-dependency), because we
 are returning success on neutron operations that have still not been
 committed to the NB DB.


>>> It's a valid scenario and should be designed properly to handle such
>>> scenarios in case we take this approach.
>>>
>>
>> ​I believe a new table in the Neutron DB is used to synchronize all of
>> the journal threads.
>> ​
>> Also note that OVN currently has no custom tables in the Neutron database
>> and it would be *very* good to keep it that way if we can.
>>
>>
>>>
>>> ​
>>>
 Couple of clarifications and thoughts below.

 Thanks
 Amitabha 

 On Jul 13, 2016, at 1:20 AM, Numan Siddique 
 wrote:

 Adding the proper tags in subject

 On Wed, Jul 13, 2016 at 1:22 PM, Numan Siddique 
 wrote:

> Hi Neutrinos,
>
> Presently, In the OVN ML2 driver we have 2 ways to sync neutron DB and
> OVN DB
>  - At neutron-server startup, OVN ML2 driver syncs the neutron DB and
> OVN DB if sync mode is set to repair.
>  - Admin can run the "neutron-ovn-db-sync-util" to sync the DBs.
>
> Recently, in the v2 of networking-odl ML2 driver (Please see (1) below
> which has more details). (ODL folks please correct me if I am wrong here)
>
>   - a journal thread is created which does the CRUD operations of
> neutron resources asynchronously (i.e it sends the REST APIs to the ODL
> controller).
>

 Would this be the equivalent of making OVSDB transactions to the OVN NB
 DB?

>>>
>>> ​Correct.
>>> ​
>>>
>>>

   - a maintenance thread is created which does some cleanup
> periodically and at startup does full sync if it detects ODL controller
> cold reboot.
>
>
> Few question I have
>  - can OVN ML2 driver take same or similar approach. Are there any
> advantages in taking this approach ? One advantage is neutron resources 
> can
> be created/updated/deleted even if the OVN ML2 driver has lost connection
> to the ovsdb-server. The journal thread would event

[openstack-dev] [nova][rfc] Booting docker images using nova libvirt

2016-07-27 Thread Sudipta Biswas

Premise:
While working with customers, we have realized:

- They want to use containers but are wary of using the same host kernel 
for multiple containers.
- They already have a significant investment (including skills) in 
OpenStack's Virtual Machine workflow and would like to re-use it as much 
as possible.

- They are very interested in using docker images.

There are some existing approaches, like Hyper and Secure Containers
workflows, which already try to address the first point. But we wanted
to arrive at an approach that addresses all of the above three in the
context of OpenStack Nova with minimal changes.


Design Considerations:

We tried a few experiments with the present libvirt driver in nova to 
accomplish a work flow to deploy containers inside virtual machines in 
OpenStack via Nova.


The fundamental premise of our approach is to run a single container 
encapsulated in a single VM. This VM image just has a bare minimum 
operating system required to run it.


The container filesystem comes from the docker image.

We would like to get the feedback on the below approaches from the 
community before proposing this as a spec or blueprint.



Approach 1

User workflow:

1. The docker image is obtained in the form of a tar file.
2. Upload this tar file to glance. This support is already there in
glance, where a container-type of docker is supported.
3. Use this image along with nova libvirt driver to deploy a virtual 
machine.


Following are some of the changes to the OpenStack code that implement
this approach:


1. Define a new conf parameter in nova called –
base_vm_image=/var/lib/libvirt/images/baseimage.qcow2

This option is used to specify the base VM image.

2. Define a new sub_virt_type = container in nova conf. Setting this
parameter will ensure mounting of the container filesystem inside the VM.
Unless qemu and kvm are used as the virt_type, this workflow will not
work at this moment.


3. In the virt/libvirt/driver.py we do the following based on the 
sub_virt_type = container:


- We create a qcow2 disk from the base_vm_image and expose that 'disk'
as the boot disk for the virtual machine.
 Note – this is very similar to a regular virtual machine boot minus
the fact that the image is not downloaded from glance but instead is
present on the host.


- We download the docker image into the /var/lib/nova/instances/_base
directory and then, for each new virtual machine boot, we create a new
per-instance directory under /var/lib/nova/instances/ and copy the
docker filesystem to it. Note – there are subsequent improvements to
this idea that could be performed along the lines of using a union
filesystem approach.


- The step above allows each virtual machine to have a different copy of 
the filesystem.


- We create a 'passthrough' mount of the filesystem via libvirt. This
code is also present in the nova libvirt driver and we just trigger it
based on our sub_virt_type parameter.


4. A cloud-init userdata is provided that looks somewhat like this:

runcmd:
  - mount -t 9p -o trans=virtio share_dir /mnt
  - chroot /mnt /bin/<command_to_run>

The command_to_run is usually the entrypoint for the docker image.

There could be better approaches to determine the entrypoint as well 
(say from docker image metadata).


Approach 2.

In this approach, the workflow remains the same as the first one with 
the exception that the
docker image is changed into a qcow2 image using a tool like 
virt-make-fs before uploading it to glance, instead of a tar file.


A tool like virt-make-fs can convert a tar file to a qcow2 image very 
easily.
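
As an illustration of the conversion step, a rough sketch might look
like this (Python; the image name and file paths are placeholders, and
whether to flatten the image with docker export or keep the layered
docker save format is a design choice for the spec, not something this
proposal has settled):

import subprocess

def docker_image_to_qcow2(image, tarball, qcow2):
    # "docker export" gives a flattened root filesystem tar (unlike
    # "docker save", which keeps the layered image format).
    cid = subprocess.check_output(
        ["docker", "create", image]).decode().strip()
    try:
        subprocess.check_call(["docker", "export", "-o", tarball, cid])
    finally:
        subprocess.check_call(["docker", "rm", cid])
    # virt-make-fs packs the tar contents into an ext4 filesystem inside
    # a qcow2 image, ready to upload to glance.
    subprocess.check_call(
        ["virt-make-fs", "--format=qcow2", "--type=ext4", tarball, qcow2])

docker_image_to_qcow2("fedora:latest", "/tmp/fedora.tar", "/tmp/fedora.qcow2")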


This image is then downloaded on the compute node and a qcow2 disk is 
created/attached to the virtual machine that boots using the 
base_vm_image.



Approach 3

A custom qcow2 image is created using kernel, initramfs and the docker 
image and uploaded to glance.  No changes are needed in openstack nova. 
It boots as a regular VM.


Changes will be needed in image generation tools and will involve a few
additional tasks from an operator point of view.



I look forward to your comments/suggestions on the above.


Thanks,

Sudipto

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] About the new meeting channel

2016-07-27 Thread jason
Hi, team

Our meeting channel request has been approved:
https://review.openstack.org/#/c/346534/ .

So let's use the new channel for the coming irc meeting.


-- 
Yours,
Jason

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][requirements] nomination period started

2016-07-27 Thread Matthew Thode
On 07/27/2016 09:07 AM, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2016-07-27 08:41:17 -0500:
>> We've started a period of self nomination in preparation for the
>> requirements project fully moving into project (as it's still under Doug
>> Hellmann).
>>
>> We are gathering the self nominations here before we vote next week.
>> https://etherpad.openstack.org/p/requirements-ptl-newton
>>
>> Nominees should also send an email to the openstack-dev list.
>>
> 
> Thanks for kicking this off, Matt! I'm looking forward to seeing this
> team fully self-sufficient.
> 
> From the etherpad, I see a deadline set of August 5. Is that for
> nominations, or the election to have an outcome?
> 
> For the record, Anita has agreed to be the primary election official,
> and I will assist her.
> 
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

That's for the self nomination period.  Immediately after that is when
the election would be; we may vote in the meeting that day for just one
of us to be put forward, or do a normal election. I don't think we've
clarified that yet.

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][requirements] nomination period started

2016-07-27 Thread Tony Breeds
On Wed, Jul 27, 2016 at 10:07:20AM -0400, Doug Hellmann wrote:

> For the record, Anita has agreed to be the primary election official,
> and I will assist her.

Thanks Doug and Anita!

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt

2016-07-27 Thread Maxime Belanger
In my opinion,


You are losing so many of the advantages that a container platform already
offers.


Example (not exhaustive):

  1.  Glance is becoming your "Image Registry"
 *   No incremental pull
 *   No image layer caching
 *   Decrease in speed
 *   You have to convert from a Container image to a qcow2 image format
(losing time here, and not incremental)
  2.  One container per VM is exactly identical to having one service per VM
 *   Only advantage is that your deployment recipes are less complicated
  3.  Scaling the app
 *   Having to use Heat to scale a container (actually a vm).

I understand why your clients are asking for this, as we are (as a first step)
doing one container per VM because our deployment architecture is not yet ready
for a full container stack. But there are other ways of doing what your client
is asking without implementing anything in Nova.
That said, there is the Magnum project to support containers in an OpenStack
environment.

Quite frankly, I am not sure nova should deal with containers as I do not see 
the link with current Nova responsibilities.

Regards,
Max


From: Sudipta Biswas 
Sent: July 27, 2016 10:17:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt


Premise:

While working with customers, we have realized:

- They want to use containers but are wary of using the same host kernel for 
multiple containers.
- They already have a significant investment (including skills) in OpenStack's 
Virtual Machine workflow and would like to re-use it as much as possible.
- They are very interested in using docker images.

There are some existing approaches like Hyper, Secure Containers workflows 
which already tries to address the first point. But we wanted to arrive at an 
approach that addresses all the above three in context of OpenStack Nova with 
minimalist changes.


Design Considerations:

We tried a few experiments with the present libvirt driver in nova to 
accomplish a work flow to deploy containers inside virtual machines in 
OpenStack via Nova.

The fundamental premise of our approach is to run a single container 
encapsulated in a single VM. This VM image just has a bare minimum operating 
system required to run it.

The container filesystem comes from the docker image.

We would like to get the feedback on the below approaches from the community 
before proposing this as a spec or blueprint.


Approach 1

User workflow:

1. The docker image is obtained in the form of a tar file.
2. Upload this tar file in glance. This support is already there in glance were 
a container-type of docker is supported.
3. Use this image along with nova libvirt driver to deploy a virtual machine.

Following are some of the changes to the OpenStack code that implements this 
approach:

1. Define a new conf parameter in nova called – 
base_vm_image=/var/lib/libvirt/images/baseimage.qcow2
This option is used to specify the base VM image.

2. define a new sub_virt_type = container in nova conf. Setting this parameter 
will ensure mounting of the container filesystem inside the VM.
Unless qemu and kvm are used as virt_type – this workflow will not work at this 
moment.

3. In the virt/libvirt/driver.py we do the following based on the sub_virt_type 
= container:

- We create a qcow2 disk from the base_vm_image and expose that 'disk' as the 
boot disk for the virtual machine.
 Note – this is very similar to a regular virtual machine boot minus the fact 
that the image is not downloaded from
glance but instead it is present on the host.


- We download the docker image into the /var/lib/nova/instances/_base directory 
and then for each new virtual machine boot – we create a new directory 
/var/lib/nova/instances/ as it's and copy the docker filesystem 
to it. Note – there are subsequent improvements to this idea that could be 
performed around the lines of using a union filesystem approach.

- The step above allows each virtual machine to have a different copy of the 
filesystem.

- We create a 'passthrough' mount of the filesystem via libvirt. This code is 
also present in the nova libvirt driver and we just trigger it based on our 
sub_virt_type parameter.

4. A cloud init – userdata is provided that looks somewhat like this:

runcmd:
  - mount -t 9p -o trans=virtio share_dir /mnt
  - chroot /mnt /bin/

The command_to_run is usually the entrypoint to for the docker image.

There could be better approaches to determine the entrypoint as well (say from 
docker image metadata).


Approach 2.

In this approach, the workflow remains the same as the first one with the 
exception that the
docker image is changed into a qcow2 image using a tool like virt-make-fs 
before uploading it to glance, instead of a tar file.

A tool like virt-make-fs can convert a tar file to a qcow2 image very easily.

This image is then downloaded on the compute node an

[openstack-dev] [ironic][nova] Indivisible Resource Providers

2016-07-27 Thread Sam Betts (sambetts)
While discussing the proposal to add resource_classes to Ironic nodes for
interacting with the resource provider system in Nova with Jim on IRC, I voiced
my concern about having a resource_class per node. My thought was that we could
achieve the behaviour we require by every Ironic node resource provider having
a "baremetal" resource class of which it can own a maximum of 1. Flavors that
are required to land on a baremetal node would then define that they require at
least 1 baremetal resource, along with any other resources they require.  For
example:

Resource Provider 1 Resources:
Baremetal: 1
RAM: 256
CPUs: 4

Resource Provider 2 Resources:
Baremetal: 1
RAM: 512
CPUs: 4

Resource Provider 3 Resources:
Baremetal: 0
RAM: 0
CPUs: 0

(Resource Provider 3 has been used, so it has zero resources left)

 Given this thought experiment, it seems like this would work great, with one
exception. If you define 2 flavors:

Flavor 1 Required Resources:
Baremetal: 1
RAM: 256

Flavor 2 Required Resources:
Baremetal: 1
RAM: 512

Flavor 2 will only schedule onto Resource Provider 2 because it is the only
resource provider that can provide the amount of resources required. However,
Flavor 1 could potentially end up landing on Resource Provider 2 even though it
provides more RAM than is actually required. The Baremetal resource class would
prevent a second instance from ever being scheduled onto that resource provider,
so scheduling more nodes doesn't result in 2 instances on the same node, but it
is an inefficient use of resources.

To combat this inefficient use of resources, I wondered if it was possible to 
add a flag to a resource provider to define that it is an indivisible resource 
provider, which would prevent flavors that don't use up all the resources a 
provider provides from landing on that provider.
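
To make that concrete, here is a rough sketch of the kind of check an
"indivisible" flag could imply (Python, illustrative only; the dicts
stand in for resource providers and the flag name and semantics are my
assumption, not the placement API):

def fits(provider, request, indivisible=False):
    # Basic capacity check: the provider must have at least the
    # requested amount of every resource class the flavor asks for.
    for rclass, wanted in request.items():
        if provider.get(rclass, 0) < wanted:
            return False
    if indivisible:
        # Indivisible provider: the request has to consume the provider's
        # full amount of every class it asks for.  (Whether classes the
        # flavor doesn't mention, e.g. CPUs here, should also count is an
        # open question.)
        for rclass, wanted in request.items():
            if provider.get(rclass, 0) != wanted:
                return False
    return True

rp1 = {"Baremetal": 1, "RAM": 256, "CPUs": 4}
rp2 = {"Baremetal": 1, "RAM": 512, "CPUs": 4}
flavor1 = {"Baremetal": 1, "RAM": 256}
flavor2 = {"Baremetal": 1, "RAM": 512}

print(fits(rp2, flavor1))                    # True today: wastes 256 RAM
print(fits(rp2, flavor1, indivisible=True))  # False: leftover RAM disqualifies
print(fits(rp2, flavor2, indivisible=True))  # True: exact fit
print(fits(rp1, flavor1, indivisible=True))  # True: exact fit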

Sam


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-07-27 Thread James E. Blair
Michał Dulko  writes:

> Just wondering - were there tries to implement syntax highlighting in
> diff view? I think that's the only thing that keeps me from switching to
> Gertty.

I don't know of anyone working on that, but I suspect it could be done
using the pygments library.
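
For example, a minimal standalone sketch with pygments looks like this
(illustration only; wiring the colours into Gertty's urwid diff view,
and picking per-file lexers instead of DiffLexer, would be the real
work):

from pygments import highlight
from pygments.lexers import DiffLexer
from pygments.formatters import TerminalFormatter

sample_diff = """\
--- a/example.py
+++ b/example.py
@@ -1,3 +1,3 @@
-def greet():
-    print("hello")
+def greet(name):
+    print("hello %s" % name)
"""

# Prints the diff with ANSI colour codes suitable for a terminal.
print(highlight(sample_diff, DiffLexer(), TerminalFormatter()))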

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][requirements] nomination period started

2016-07-27 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-07-27 09:19:33 -0500:
> On 07/27/2016 09:07 AM, Doug Hellmann wrote:
> > Excerpts from Matthew Thode's message of 2016-07-27 08:41:17 -0500:
> >> We've started a period of self nomination in preparation for the
> >> requirements project fully moving into project (as it's still under Doug
> >> Hellmann).
> >>
> >> We are gathering the self nominations here before we vote next week.
> >> https://etherpad.openstack.org/p/requirements-ptl-newton
> >>
> >> Nominees should also send an email to the openstack-dev list.
> >>
> > 
> > Thanks for kicking this off, Matt! I'm looking forward to seeing this
> > team fully self-sufficient.
> > 
> > From the etherpad, I see a deadline set of August 5. Is that for
> > nominations, or the election to have an outcome?
> > 
> > For the record, Anita has agreed to be the primary election official,
> > and I will assist her.
> > 
> > Doug
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> That's for the self nomination period.  Immediately after that is when
> the election would be, we may vote in the meeting that day for just one
> of us to be put forward or do a normal election, I don't think we've
> clarified that yet.
> 

OK. We have some time to work that out, then.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] mascot/logo ideas

2016-07-27 Thread David Moreau Simard
What's even the relation between a wolf and puppet?

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Jul 27, 2016 9:36 AM, "Emilien Macchi"  wrote:

> Looking at the poll we have 4 votes for the wolf, I guess we can go
> with it. Tardigrade looses by 2 votes.
>
> If anyone is against this vote please raise your hand now :-)
>
> On Tue, Jul 26, 2016 at 4:26 PM, Emilien Macchi 
> wrote:
> > We have 6 votes in total, results are:
> >
> > 2 votes for wolf.
> > 2 votes for  tardigrade - https://en.wikipedia.org/wiki/Tardigrade
> > 1 vote for axolotl - https://en.wikipedia.org/wiki/Axolotl
> > 1 vote for dog puppet:
> > https://img1.etsystatic.com/000/0/5613081/il_fullxfull.241000707.jpg
> >
> > Sounds like we haven't reached a consensus and failed to get more
> > votes... I would propose to either report our vote or cancel and
> > choose no mascot.
> > Thoughts?
> >
> > On Mon, Jul 25, 2016 at 10:27 PM, Emilien Macchi 
> wrote:
> >> Hi,
> >>
> >> So we have until July 27th to take the decision about our mascot.
> >> If you are interested to vote, please add +1 on the proposals on the
> >> etherpad [1].
> >>
> >> By Wednesday, we'll take the one with the most of +1
> >>
> >> Thanks,
> >>
> >> [1] https://etherpad.openstack.org/p/puppet-openstack-mascot-logo
> >>
> >> On Tue, Jul 12, 2016 at 11:23 AM, Emilien Macchi 
> wrote:
> >>> Hey,
> >>>
> >>> During the meeting we decided to use etherpad to submit new ideas for
> >>> our mascot / logo [1]:
> >>> https://etherpad.openstack.org/p/puppet-openstack-mascot-logo
> >>>
> >>> Feel free to use your imagination as long you stay SFW :-)
> >>>
> >>> Thanks,
> >>>
> >>> [1] http://osdir.com/ml/openstack-dev/2016-07/msg00456.html
> >>> --
> >>> Emilien Macchi
> >>
> >>
> >>
> >> --
> >> Emilien Macchi
> >
> >
> >
> > --
> > Emilien Macchi
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] reminder to release libraries early and often

2016-07-27 Thread Doug Hellmann
We're coming up on the feature and release freeze for libraries, and
quite a few of them have unreleased changes. Keep in mind that changes
to any branch of the library that are not included in a tagged release
are not being used in the functional or integration tests for server
projects, so the longer we wait to release them the less testing time we
have with them and the more risk we're building up.

Release liaisons, please review http://paste.openstack.org/show/542607/
for the list of unreleased changes of your libraries and prepare a
release for this week or next week.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Modifying just a few values on overcloud redeploy

2016-07-27 Thread Adam Young

On 07/27/2016 06:04 AM, Steven Hardy wrote:

On Tue, Jul 26, 2016 at 05:23:21PM -0400, Adam Young wrote:

I worked through how to do a complete clone of the templates to do a
deploy and change a couple values here:

http://adam.younglogic.com/2016/06/custom-overcloud-deploys/

However, all I want to do is to set two config options in Keystone.  Is
there a simple way to just modify the two values below?  Ideally, just
making a single env file and passing it via openstack overcloud deploy -e
somehow.

'identity/domain_specific_drivers_enabled': value => 'True';

'identity/domain_configurations_from_database': value => 'True';

Yes, the best way to do this is to pass a hieradata override, as documented
here:

http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_config.html

First step is to look at the puppet module that manages that configuration,
in this case I assume it's puppet-keystone:

https://github.com/openstack/puppet-keystone/tree/master/manifests

Some grepping shows that domain_specific_drivers_enabled is configured
here:

https://github.com/openstack/puppet-keystone/blob/master/manifests/init.pp#L1124..L1155

So working back from those variables, "using_domain_config" and
"domain_config_directory", you'd create a yaml file that looks like:

parameter_defaults:
   ControllerExtraConfig:
 keystone::using_domain_config: true
 keystone::domain_config_directory: /path/to/config

However, it seems that you want to configure domain_specific_drivers_enabled
*without* configuring domain_config_directory, so that it comes from the
database?

In that case, puppet has a "passthrough" interface you can use (this is the
same for all openstack puppet modules AFAIK):

https://github.com/openstack/puppet-keystone/blob/master/manifests/config.pp

Environment (referred to as controller_extra.yaml below) file looks like:

parameter_defaults:
   ControllerExtraConfig:
 keystone::config::keystone_config:
   identity/domain_specific_drivers_enabled:
 value: true
   identity/domain_configurations_from_database:
 value: true

I'm assuming I can mix these two approaches, so that, if I need

keystone::using_domain_config: true



as well it would look like this:

parameter_defaults:
  ControllerExtraConfig:
keystone::using_domain_config: true
keystone::config::keystone_config:
  identity/domain_specific_drivers_enabled:
value: true
  identity/domain_configurations_from_database:
value: true


And over time, if support is added to the templates for these values
and we start seeing errors, we can just change from the latter
approach to the one you posted earlier?

Note the somewhat idiosyncratic syntax: you pass the value via a
"value: foo" map, not directly to the configuration key (don't ask me why!)

Then do openstack overcloud deploy --templates /path/to/templates -e 
controller_extra.yaml

The one gotcha here is if puppet keystone later adds an explicit interface
which conflicts with this, e.g a domain_specific_drivers_enabled variable
in the above referenced init.pp, you will get a duplicate definition error
(because you can't define the same thing twice in the puppet catalog).

This means that long-term use of the generic keystone::config::keystone_config
interface can be fragile, so it's best to add an explicit e.g
keystone::domain_specific_drivers_enabled interface if this is a long-term
requirement.

This is probably something we should add to our docs, I'll look at doing
that.

Hope that helps,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] no neutron-lib meeting today

2016-07-27 Thread Henry Gessau
Myself and dougwig will be unable to attend today.
We'll resume as usual next week.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] - Addition of Fuxi as a subproject

2016-07-27 Thread Gal Sagie
Hello everyone,

The following is a governance request to add project Fuxi as a storage
solution part
for Kuryr (part of Kuryr deliverable) [1]

I hope this can lead to other initiatives that want to work on drivers and
glue code that
connects containers orchestration engines and OpenStack projects.
(For example i believe a Keystone driver for Kubernetes can also be started
as a sub
project)

Please feel free to share your opinions/comments in the mailing list or in
the patch review.

[1] https://review.openstack.org/#/c/347083/


Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-07-27 Thread Masayuki Igawa
Hi!

On Wed, Jul 27, 2016 at 11:50 PM, James E. Blair  wrote:
> Michał Dulko  writes:
>
>> Just wondering - were there tries to implement syntax highlighting in
>> diff view? I think that's the only thing that keeps me from switching to
>> Gertty.
>
> I don't know of anyone working on that, but I suspect it could be done
> using the pygments library.

Oh, it's an interesting feature to me :) I'll try to investigate and
implement it in the next couple of days :)

Thanks,
-- Masayuki Igawa

>
> -Jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] mascot/logo ideas

2016-07-27 Thread Emilien Macchi
On Wed, Jul 27, 2016 at 10:55 AM, David Moreau Simard  wrote:
> What's even the relation between a wolf and puppet ?

https://www.thegiftexperience.co.uk/cms_media/images/600x1000_fitbox-wolf_puppet_a.jpg

That's the only thing I found on the Internet.

> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>
>
> On Jul 27, 2016 9:36 AM, "Emilien Macchi"  wrote:
>>
>> Looking at the poll we have 4 votes for the wolf, I guess we can go
>> with it. Tardigrade loses by 2 votes.
>>
>> If anyone is against this vote please raise your hand now :-)
>>
>> On Tue, Jul 26, 2016 at 4:26 PM, Emilien Macchi 
>> wrote:
>> > We have 6 votes in total, results are:
>> >
>> > 2 votes for wolf.
>> > 2 votes for  tardigrade - https://en.wikipedia.org/wiki/Tardigrade
>> > 1 vote for axolotl - https://en.wikipedia.org/wiki/Axolotl
>> > 1 vote for dog puppet:
>> > https://img1.etsystatic.com/000/0/5613081/il_fullxfull.241000707.jpg
>> >
>> > Sounds like we haven't reached a consensus and failed to get more
>> > votes... I would propose to either report our vote or cancel and
>> > choose no mascot.
>> > Thoughts?
>> >
>> > On Mon, Jul 25, 2016 at 10:27 PM, Emilien Macchi 
>> > wrote:
>> >> Hi,
>> >>
>> >> So we have until July 27th to take the decision about our mascot.
>> >> If you are interested to vote, please add +1 on the proposals on the
>> >> etherpad [1].
>> >>
>> >> By Wednesday, we'll take the one with the most of +1
>> >>
>> >> Thanks,
>> >>
>> >> [1] https://etherpad.openstack.org/p/puppet-openstack-mascot-logo
>> >>
>> >> On Tue, Jul 12, 2016 at 11:23 AM, Emilien Macchi 
>> >> wrote:
>> >>> Hey,
>> >>>
>> >>> During the meeting we decided to use etherpad to submit new ideas for
>> >>> our mascot / logo [1]:
>> >>> https://etherpad.openstack.org/p/puppet-openstack-mascot-logo
>> >>>
>> >>> Feel free to use your imagination as long you stay SFW :-)
>> >>>
>> >>> Thanks,
>> >>>
>> >>> [1] http://osdir.com/ml/openstack-dev/2016-07/msg00456.html
>> >>> --
>> >>> Emilien Macchi
>> >>
>> >>
>> >>
>> >> --
>> >> Emilien Macchi
>> >
>> >
>> >
>> > --
>> > Emilien Macchi
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Modifying just a few values on overcloud redeploy

2016-07-27 Thread Steven Hardy
On Wed, Jul 27, 2016 at 11:04:22AM -0400, Adam Young wrote:
> On 07/27/2016 06:04 AM, Steven Hardy wrote:
> > On Tue, Jul 26, 2016 at 05:23:21PM -0400, Adam Young wrote:
> > > I worked through how to do a complete clone of the templates to do a
> > > deploy and change a couple values here:
> > > 
> > > http://adam.younglogic.com/2016/06/custom-overcloud-deploys/
> > > 
> > > However, all I want to do is to set two config options in Keystone.  
> > > Is
> > > there a simple way to just modify the two values below?  Ideally, just
> > > making a single env file and passing it via openstack overcloud 
> > > deploy -e
> > > somehow.
> > > 
> > > 'identity/domain_specific_drivers_enabled': value => 'True';
> > > 
> > > 'identity/domain_configurations_from_database': value => 'True';
> > Yes, the best way to do this is to pass a hieradata override, as documented
> > here:
> > 
> > http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_config.html
> > 
> > First step is to look at the puppet module that manages that configuration,
> > in this case I assume it's puppet-keystone:
> > 
> > https://github.com/openstack/puppet-keystone/tree/master/manifests
> > 
> > Some grepping shows that domain_specific_drivers_enabled is configured
> > here:
> > 
> > https://github.com/openstack/puppet-keystone/blob/master/manifests/init.pp#L1124..L1155
> > 
> > So working back from those variables, "using_domain_config" and
> > "domain_config_directory", you'd create a yaml file that looks like:
> > 
> > parameter_defaults:
> >ControllerExtraConfig:
> >  keystone::using_domain_config: true
> >  keystone::domain_config_directory: /path/to/config
> > 
> > However, it seems that you want to configure domain_specific_drivers_enabled
> > *without* configuring domain_config_directory, so that it comes from the
> > database?
> > 
> > In that case, puppet has a "passthrough" interface you can use (this is the
> > same for all openstack puppet modules AFAIK):
> > 
> > https://github.com/openstack/puppet-keystone/blob/master/manifests/config.pp
> > 
> > Environment (referred to as controller_extra.yaml below) file looks like:
> > 
> > parameter_defaults:
> >ControllerExtraConfig:
> >  keystone::config::keystone_config:
> >identity/domain_specific_drivers_enabled:
> >  value: true
> >identity/domain_configurations_from_database:
> >  value: true
> I'm assuming I can mix these two approaches, so that, if I need
> 
> keystone::using_domain_config: true
> 
> 
> 
> as well it would look like this:
> 
> parameter_defaults:
>   ControllerExtraConfig:
> keystone::using_domain_config: true
> keystone::config::keystone_config:
>   identity/domain_specific_drivers_enabled:
> value: true
>   identity/domain_configurations_from_database:
> value: true

Yes, but I think you'll need to remove the
domain_specific_drivers_enabled because that is already set to true via
using_domain_config (see my earlier puppet-keystone link)

So it would look like:

parameter_defaults:
  ControllerExtraConfig:
keystone::using_domain_config: true
keystone::config::keystone_config:
  identity/domain_configurations_from_database:
value: true

As previously mentioned, there appears to be some validation of
keystone::domain_config_directory when you enable
keystone::using_domain_config so you'll probably need to pass both.
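
So a combined environment file (controller_extra.yaml) might end up looking
something like this (the domain_config_directory path below is just a
placeholder, use whatever your deployment actually expects):

parameter_defaults:
  ControllerExtraConfig:
    keystone::using_domain_config: true
    keystone::domain_config_directory: /etc/keystone/domains
    keystone::config::keystone_config:
      identity/domain_configurations_from_database:
        value: true

which you'd then pass at deploy time with something like:

openstack overcloud deploy --templates -e controller_extra.yaml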

> And over time,  if there is support put into the templates for the values,
> and we start seeing errors, we can just change from the latter  approach to
> the one you posted earlier?

Yes, this is possible, but it's been the cause of a few CI firedrills, so we
might consider just wiring the needed interfaces into puppet-keystone now,
once you figure out exactly what's required.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Fox, Kevin M
Competition is a good thing, when there are good, technical reasons for it. 
"The architecture of X just doesn't fit for my need Y". "Project X won't 
address my technical need, so I need to fork/spawn a new project Y to get a 
solution." I do not believe we're in this situation here.

If it's just competition because developer X doesn't want to work with developer 
Y, that's fine too, provided that the community isn't paying for both.

Our collective resources are somewhat limited. We have a relatively static pool 
of gate resources and infra folks. Nearing releases, those get particularly 
scarce/valuable. We all notice it during those times, and if we are spending 
resources on needlessly competing things, that's bad. Why take the pain?

As I see it, right now Fuel CCP seems separate for political, not technical, 
reasons and is consuming OpenStack community resources for what seems to be the 
benefit of only one company. It's fine that Fuel CCP exists. But I think it 
should either have its own non-OpenStack infra, or commit to joining the Big Tent 
so we can debate whether we want two basically identical things inside.

Thanks,
Kevin


From: Michael Still [mi...@stillhq.com]
Sent: Wednesday, July 27, 2016 5:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting 
Fuel CCP (docker/k8s) kicked off

On Tue, Jul 26, 2016 at 4:44 PM, Fox, Kevin M 
mailto:kevin@pnnl.gov>> wrote:

[snip]

The issue is, as I see it, a parallel activity to one that is currently 
accepted into the Big Tent, aka Containerized Deployment

[snip]

This seems to be the crux of the matter as best as I can tell. Is it true to 
say that the concern is that Kolla believes they "own" the containerized 
deployment space inside the Big Tent?

Whether to have competing projects in the big tent was debated by the TC at the 
time and my recollection is that we decided that was a good thing -- if someone 
wanted to develop a Nova replacement, then let them do it in public with the 
community. It would either win or lose based on its merits. Why is this not 
something which can happen here as well?

I guess I should also point out that there is at least one other big tent 
deployment tool deploying containerized openstack components now, so its not 
like this idea is unique or new. Perhaps using kubernetes makes it different 
somehow, but I don't see it.

Michael




--
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] repo split

2016-07-27 Thread Steven Dake (stdake)
Martin,

Thanks for your feedback.  This vote has expired without a consensus.  I
think we need to try a new vote once the core team gets comfortable with
the backporting process I spoke about earlier in a different thread; at that
point this vote would likely pass.

Regards
-steve

On 7/27/16, 3:09 AM, "Martin André"  wrote:

>On Thu, Jul 21, 2016 at 5:21 PM, Steven Dake (stdake) 
>wrote:
>> I am voting -1 for now, but would likely change my vote after we branch
>> Newton.  I'm not a super big fan of votes way ahead of major events
>>(such
>> as branching) because a bunch of things could change between now and
>>then
>> and the vote would be binding.
>>
>> Still community called the vote - so vote stands :)
>
>IIUC, if split there is, it's scheduled for when we branch out Newton
>which is only 1 month ahead.
>
>I'm +1 on splitting ansible deployment code into kolla-ansible.
>
>Martin
>
>> Regards
>> -steve
>>
>>
>> On 7/20/16, 1:48 PM, "Ryan Hallisey"  wrote:
>>
>>>Hello.
>>>
>>>The repo split discussion that started at summit was brought up again at
>>>the midcycle.
>>>The discussion was focused around splitting the Docker containers and
>>>Ansible code into
>>>two separate repos [1].
>>>
>>>One of the main opponents to the split is backports.  Backports will
>>>need
>>>to be done
>>>by hand for a few releases.  So far, there hasn't been a ton of
>>>backports, but that could
>>>always change.
>>>
>>>As for splitting, it provides a much clearer view of what pieces of the
>>>project are where.
>>>Kolla-ansible with its own repo will sit along side kolla-kubernetes as
>>>consumers of the
>>>kolla repo.
>>>
>>>The target for the split will be for day 1 of Occata. The core team will
>>>vote on
>>>the change of splitting kolla into kolla-ansible and kolla.
>>>
>>>Cores please respond with a +1/-1 to approve or disapprove the repo
>>>split. Any community
>>>member feel free to weigh in with your opinion.
>>>
>>>+1
>>>-Ryan
>>>
>>>[1] - https://etherpad.openstack.org/p/kolla-N-midcycle-repo-split
>>>
>>>
>>>__
>>>OpenStack Development Mailing List (not for usage questions)
>>>Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt

2016-07-27 Thread Hongbin Lu
Unfortunately, this doesn't seem to fit into Magnum either. According to the 
current mission statement [1], Magnum is for provisioning, scaling, and 
managing Container Orchestration Engines (i.e. Kubernetes), so managing 
containers in VMs is not consistent with Magnum's mission.

It seems there is a misconception that Magnum is the home for containers in 
OpenStack, so all the container use cases were pushed to Magnum. However, this 
is not true, because Magnum also has its own scope and cannot support anything 
that goes beyond it.

[1] 
https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2128

Best regards,
Hongbin

From: Maxime Belanger [mailto:mbelan...@internap.com]
Sent: July-27-16 10:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][rfc] Booting docker images using nova 
libvirt


In my opinion,



You are losing many of the advantages that a container platform already 
offers.

Example (not exhaustive):

  1.  Glance is becoming your "Image Registry"

 *   No incremental pull
 *   No image layer caching
 *   Decrease in speed
 *   You have to convert from a container image to a qcow2 image format 
(losing time here, and it is not incremental)

  2.  One container per VM is exactly identical to having one service per VM

 *   The only advantage is that your deployment recipes are less complicated

  3.  Scaling the app

 *   You have to use Heat to scale a container (actually a VM).

I understand why your clients are asking for this, as we are (as a first step) 
doing one container per VM because our deployment architecture is not yet ready 
for a full container stack. But there are other ways of doing what your clients 
are asking for without implementing anything in Nova.
That said, there is the Magnum project to support containers in an OpenStack 
environment.

Quite frankly, I am not sure nova should deal with containers as I do not see 
the link with current Nova responsibilities.

Regards,
Max

From: Sudipta Biswas 
mailto:sbisw...@linux.vnet.ibm.com>>
Sent: July 27, 2016 10:17:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt


Premise:

While working with customers, we have realized:

- They want to use containers but are wary of using the same host kernel for 
multiple containers.
- They already have a significant investment (including skills) in OpenStack's 
Virtual Machine workflow and would like to re-use it as much as possible.
- They are very interested in using docker images.

There are some existing approaches, like the Hyper and Secure Containers workflows, 
which already try to address the first point. But we wanted to arrive at an 
approach that addresses all of the above three in the context of OpenStack Nova 
with minimal changes.


Design Considerations:

We tried a few experiments with the present libvirt driver in nova to 
accomplish a work flow to deploy containers inside virtual machines in 
OpenStack via Nova.

The fundamental premise of our approach is to run a single container 
encapsulated in a single VM. This VM image just has a bare minimum operating 
system required to run it.

The container filesystem comes from the docker image.

We would like to get the feedback on the below approaches from the community 
before proposing this as a spec or blueprint.


Approach 1

User workflow:

1. The docker image is obtained in the form of a tar file.
2. Upload this tar file to glance. This support is already there in glance, where 
a container-type of docker is supported.
3. Use this image along with nova libvirt driver to deploy a virtual machine.

Following are some of the changes to the OpenStack code that implements this 
approach:

1. Define a new conf parameter in nova called - 
base_vm_image=/var/lib/libvirt/images/baseimage.qcow2
This option is used to specify the base VM image.

2. define a new sub_virt_type = container in nova conf. Setting this parameter 
will ensure mounting of the container filesystem inside the VM.
Unless qemu and kvm are used as virt_type - this workflow will not work at this 
moment.

3. In the virt/libvirt/driver.py we do the following based on the sub_virt_type 
= container:

- We create a qcow2 disk from the base_vm_image and expose that 'disk' as the 
boot disk for the virtual machine.
 Note - this is very similar to a regular virtual machine boot minus the fact 
that the image is not downloaded from
glance but instead it is present on the host.



- We download the docker image into the /var/lib/nova/instances/_base directory 
and then for each new virtual machine boot - we create a new directory 
/var/lib/nova/instances/ as it's and copy the docker filesystem 
to it. Note - there are subsequent improvements to this idea that could be 
performed around the lines of using a union filesystem approach.

- The step above allows each virtual machine to have a different copy of the filesystem.

Re: [openstack-dev] [ironic][nova] Indivisible Resource Providers

2016-07-27 Thread Ed Leafe
On Jul 27, 2016, at 9:48 AM, Sam Betts (sambetts)  wrote:

> While discussing the proposal to add resource_class’ to Ironic nodes for 
> interacting with the resource provider system in Nova with Jim on IRC, I 
> voiced my concern about having a resource_class per node. My thoughts were 
> that we could achieve the behaviour we require by every Ironic node resource 
> provider having a "baremetal" resource class of which they can own a maximum 
> of 1. Flavors that are required to land on a baremetal node would then 
> define that they require at least 1 baremetal resource, along with any other 
> resources they require.

I was going to respond pointing out the issues with that approach, but then the 
rest of your email did just that. :)

I strongly preferred the approach where each particular hardware configuration 
would be a class, so that if you had 50 nodes with configuration A and 20 
nodes with configuration B, that would be reflected in two resource 
classes, with corresponding inventories to match the nodes. When a node is 
provisioned, that inventory is decremented. This would be much more consistent 
with the rest of the resource provider design, as having many, many classes, all 
of which represent identical hardware, seems backwards.
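
A rough sketch of the shape being described (names invented purely for
illustration, not actual placement API data):

# One resource class per hardware configuration, with an inventory equal to
# the number of matching nodes; provisioning a config-A node decrements the
# first inventory by one.
inventories = {
    'baremetal-config-a': 50,   # 50 identical nodes
    'baremetal-config-b': 20,   # 20 identical nodes
}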

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Joshua Harlow

Michael Still wrote:

On Tue, Jul 26, 2016 at 4:44 PM, Fox, Kevin M mailto:kevin@pnnl.gov>> wrote:

[snip]

The issue is, as I see it, a parallel activity to one that is
currently accepted into the Big Tent, aka Containerized Deployment


[snip]

This seems to be the crux of the matter as best as I can tell. Is it
true to say that the concern is that Kolla believes they "own" the
containerized deployment space inside the Big Tent?

Whether to have competing projects in the big tent was debated by the TC
at the time and my recollection is that we decided that was a good thing
-- if someone wanted to develop a Nova replacement, then let them do it
in public with the community. It would either win or lose based on its
merits. Why is this not something which can happen here as well?


For real, I (or someone) can start a nova replacement without getting 
rejected (or yelled at or ...) by the TC saying it's a competing 
project??? Wow, this is news to me...




I guess I should also point out that there is at least one other big
tent deployment tool deploying containerized openstack components now,
so it's not like this idea is unique or new. Perhaps using kubernetes
makes it different somehow, but I don't see it.

Michael




--
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Ed Leafe
On Jul 27, 2016, at 10:51 AM, Joshua Harlow  wrote:

>> Whether to have competing projects in the big tent was debated by the TC
>> at the time and my recollection is that we decided that was a good thing
>> -- if someone wanted to develop a Nova replacement, then let them do it
>> in public with the community. It would either win or lose based on its
>> merits. Why is this not something which can happen here as well?
> 
> For real, I (or someone) can start a nova replacement without getting 
> rejected (or yelled at or ...) by the TC saying it's a competing project??? 
> Wow, this is news to me...

No, you can’t start a Nova replacement and still call yourself OpenStack.

The sense I have gotten over the years from the TC is that gratuitous 
competition is strongly discouraged. When the Monasca project was being 
considered for the big tent, there was a *lot* of concern expressed over the 
partial overlap with Ceilometer. It was only after much reassurance that the 
overlap was not fundamental that these objections were dropped.

I have no stake in either Fuel or Kolla, so my only concern is duplication of 
effort. You can always achieve more working together, though it will never 
happen as fast as when you go it alone. It’s a trade-off: the needs of the 
vendor vs. the health of the community.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] mascot/logo ideas

2016-07-27 Thread Gui Maluf
Wolf? It's so overrated.
+1 for tardigrade


On Wed, Jul 27, 2016 at 12:39 PM, Emilien Macchi  wrote:

> On Wed, Jul 27, 2016 at 10:55 AM, David Moreau Simard 
> wrote:
> > What's even the relation between a wolf and puppet ?
>
>
> https://www.thegiftexperience.co.uk/cms_media/images/600x1000_fitbox-wolf_puppet_a.jpg
>
> That's the only thing I found on the Internet.
>
> > David Moreau Simard
> > Senior Software Engineer | Openstack RDO
> >
> > dmsimard = [irc, github, twitter]
> >
> >
> > On Jul 27, 2016 9:36 AM, "Emilien Macchi"  wrote:
> >>
> >> Looking at the poll we have 4 votes for the wolf, I guess we can go
> >> with it. Tardigrade loses by 2 votes.
> >>
> >> If anyone is against this vote please raise your hand now :-)
> >>
> >> On Tue, Jul 26, 2016 at 4:26 PM, Emilien Macchi 
> >> wrote:
> >> > We have 6 votes in total, results are:
> >> >
> >> > 2 votes for wolf.
> >> > 2 votes for  tardigrade - https://en.wikipedia.org/wiki/Tardigrade
> >> > 1 vote for axolotl - https://en.wikipedia.org/wiki/Axolotl
> >> > 1 vote for dog puppet:
> >> > https://img1.etsystatic.com/000/0/5613081/il_fullxfull.241000707.jpg
> >> >
> >> > Sounds like we haven't reached a consensus and failed to get more
> >> > votes... I would propose to either report our vote or cancel and
> >> > choose no mascot.
> >> > Thoughts?
> >> >
> >> > On Mon, Jul 25, 2016 at 10:27 PM, Emilien Macchi 
> >> > wrote:
> >> >> Hi,
> >> >>
> >> >> So we have until July 27th to take the decision about our mascot.
> >> >> If you are interested to vote, please add +1 on the proposals on the
> >> >> etherpad [1].
> >> >>
> >> >> By Wednesday, we'll take the one with the most of +1
> >> >>
> >> >> Thanks,
> >> >>
> >> >> [1] https://etherpad.openstack.org/p/puppet-openstack-mascot-logo
> >> >>
> >> >> On Tue, Jul 12, 2016 at 11:23 AM, Emilien Macchi  >
> >> >> wrote:
> >> >>> Hey,
> >> >>>
> >> >>> During the meeting we decided to use etherpad to submit new ideas
> for
> >> >>> our mascot / logo [1]:
> >> >>> https://etherpad.openstack.org/p/puppet-openstack-mascot-logo
> >> >>>
> >> >>> Feel free to use your imagination as long you stay SFW :-)
> >> >>>
> >> >>> Thanks,
> >> >>>
> >> >>> [1] http://osdir.com/ml/openstack-dev/2016-07/msg00456.html
> >> >>> --
> >> >>> Emilien Macchi
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Emilien Macchi
> >> >
> >> >
> >> >
> >> > --
> >> > Emilien Macchi
> >>
> >>
> >>
> >> --
> >> Emilien Macchi
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
*guilherme* \n
\t *maluf*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] repo split

2016-07-27 Thread Ryan Hallisey
Agreed, it's been a week. The vote can be brought up again when
the conclusion of N is closer.

-Ryan

- Original Message -
From: "Steven Dake (stdake)" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, July 27, 2016 11:46:44 AM
Subject: Re: [openstack-dev] [kolla] repo split

Martin,

Thanks for your feedback.  This vote has expired without a consensus.  I
think we need to try a new vote once the core team gets comfortable with
the backporting process I spoke about earlier in a different thread; at that
point this vote would likely pass.

Regards
-steve

On 7/27/16, 3:09 AM, "Martin André"  wrote:

>On Thu, Jul 21, 2016 at 5:21 PM, Steven Dake (stdake) 
>wrote:
>> I am voting -1 for now, but would likely change my vote after we branch
>> Newton.  I'm not a super big fan of votes way ahead of major events
>>(such
>> as branching) because a bunch of things could change between now and
>>then
>> and the vote would be binding.
>>
>> Still community called the vote - so vote stands :)
>
>IIUC, if split there is, it's scheduled for when we branch out Newton
>which is only 1 month ahead.
>
>I'm +1 on splitting ansible deployment code into kolla-ansible.
>
>Martin
>
>> Regards
>> -steve
>>
>>
>> On 7/20/16, 1:48 PM, "Ryan Hallisey"  wrote:
>>
>>>Hello.
>>>
>>>The repo split discussion that started at summit was brought up again at
>>>the midcycle.
>>>The discussion was focused around splitting the Docker containers and
>>>Ansible code into
>>>two separate repos [1].
>>>
>>>One of the main opponents to the split is backports.  Backports will
>>>need
>>>to be done
>>>by hand for a few releases.  So far, there hasn't been a ton of
>>>backports, but that could
>>>always change.
>>>
>>>As for splitting, it provides a much clearer view of what pieces of the
>>>project are where.
>>>Kolla-ansible with its own repo will sit along side kolla-kubernetes as
>>>consumers of the
>>>kolla repo.
>>>
>>>The target for the split will be for day 1 of Occata. The core team will
>>>vote on
>>>the change of splitting kolla into kolla-ansible and kolla.
>>>
>>>Cores please respond with a +1/-1 to approve or disapprove the repo
>>>split. Any community
>>>member feel free to weigh in with your opinion.
>>>
>>>+1
>>>-Ryan
>>>
>>>[1] - https://etherpad.openstack.org/p/kolla-N-midcycle-repo-split
>>>
>>>
>>>__
>>>OpenStack Development Mailing List (not for usage questions)
>>>Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [UI] Version

2016-07-27 Thread Dougal Matthews
On Wednesday, 27 July 2016, Steven Hardy  wrote:

> On Wed, Jul 27, 2016 at 02:08:08PM +0100, Dougal Matthews wrote:
> >On 27 July 2016 at 12:41, Honza Pokorny  > wrote:
> >
> >  Hello folks,
> >
> >  As the tripleo-ui project is quickly maturing, it might be time to
> start
> >  versioning our code.  As of now, the version is set to 0.0.1 and
> that
> >  hardly reflects the state of the project.
> >
> >  What do you think?
> >
> >Yup, Sounds good to me! I would suggest that we make the Newton
> >release 1.0 and then continue from there. I am not sure what the
> >normal pattern is tho'
>
> No, please don't invent an independent versioning scheme.  tripleo UI
> should be a tripleo deliverable, and part of the coordinated release (see
> my reply to Honza).


Aha, sorry, I must have misunderstood how things are handled generally. I
was remembering how tripleoclient was first versioned. Is there a document
somewhere covering the process? All I can find is the release management wiki
page.


>
> Thanks!
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Fox, Kevin M
Sorry, missed part of this.

I do not believe there is overlap between openstack-ansible, which uses lxc 
containerization with thick containers, and kolla, which uses docker/kubernetes 
with thin containers. These are architecturally very different things and, to 
reference my other email, there are technical reasons for doing things each way.

The Fuel CCP case is different, in that it is doing the same technical thing as 
kolla: kubernetes-managed, docker-based thin containers.

Thanks,
Kevin


From: Michael Still [mi...@stillhq.com]
Sent: Wednesday, July 27, 2016 5:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting 
Fuel CCP (docker/k8s) kicked off

On Tue, Jul 26, 2016 at 4:44 PM, Fox, Kevin M 
mailto:kevin@pnnl.gov>> wrote:

[snip]

The issue is, as I see it, a parallel activity to one that is currently 
accepted into the Big Tent, aka Containerized Deployment

[snip]

This seems to be the crux of the matter as best as I can tell. Is it true to 
say that the concern is that Kolla believes they "own" the containerized 
deployment space inside the Big Tent?

Whether to have competing projects in the big tent was debated by the TC at the 
time and my recollection is that we decided that was a good thing -- if someone 
wanted to develop a Nova replacement, then let them do it in public with the 
community. It would either win or lose based on its merits. Why is this not 
something which can happen here as well?

I guess I should also point out that there is at least one other big tent 
deployment tool deploying containerized openstack components now, so it's not 
like this idea is unique or new. Perhaps using kubernetes makes it different 
somehow, but I don't see it.

Michael




--
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][requirements] nomination period started

2016-07-27 Thread Tony Breeds
On Wed, Jul 27, 2016 at 08:41:17AM -0500, Matthew Thode wrote:
> We've started a period of self nomination in preparation for the
> requirements project fully moving into project (as it's still under Doug
> Hellmann).
> 
> We are gathering the self nominations here before we vote next week.
> https://etherpad.openstack.org/p/requirements-ptl-newton
> 
> Nominees should also send an email to the openstack-dev list.

I'd like to nominate myself for PTL of the to-be-formed requirements project.

For as long as I've been working on OpenStack, the requirements data, code and
process have been managed by a sort of sub-team of release management with
strong overlaps to the stable branch team(s).  The workload has grown to the
point that it needs its own team with a PTL to manage priorities and reduce the
cross-project pain points.

I feel like I understand the role of the requirements team and have advocated
that the requirements team should be more active in cross project issues.

The requirements team, like probably every other team, has a lot of debt.
We've worked hard to reduce that since Austin and I see that in the near future
we'll be able to tackle some of the bigger problems.

In rough priority order:
 - Improving communication: decisions made in the requirements team often
   affect many projects, and I am committed to bringing more experts in
   for strategic reviews/discussions
 - cross-project testing for upper-constraints changes.  Once this is done
   it'll make breakage, like the recent oslo.context one, much harder to hit.
 - Work closely with the release managers, as there are still a lot of common
   issues there, in that a release of $project will trigger processes in the
   requirements team.
 - Getting openstack_requirements *code* to the point it can be installed as a
   library.  We've seen issues in the past where stable branches need largish
   backports to work correctly.  Really the *code* and *data* should be treated
   independently

I suspect that if you look at the review statistics for the requirements repo
I'm much lower than many of the core team.

I believe I'm eligible for nomination due to any of the following reviews.

https://review.openstack.org/#/q/project:openstack/requirements+status:merged+owner:tonyb+after:2016-04-07

IRC: tonyb

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [oslo] configs help text for glance_store

2016-07-27 Thread Nikhil Komawar
Hi all,


There was an intricate issue [1] when using help text with additive
strings for translations, when passed through oslo.config. The error
could not be seen in your local test environment or in the glance_store docs;
it showed up only on the glance docs gate. (Well, unless you
tested the latest glance_store with glance before commit/review.)


Thanks to Doug Hellmann, a solution for this has been proposed [2] for
that issue. However, until the next release and subsequent sync of
oslo.config in glance, I sincerely advise all the glance core reviewers,
glance_store driver maintainers and all other respective individuals to
refrain from merging any help texts that have such 'additive strings'.
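
A rough illustration of the kind of 'additive' help string being discussed
(hypothetical option name, not real glance_store code; the lambda stands in
for the project's real i18n translation marker):

from oslo_config import cfg

_ = lambda s: s  # placeholder for the real i18n marker (e.g. glance_store.i18n._)

example_opt = cfg.StrOpt(
    'example_store_option',
    # Help text built by adding translated strings together -- this is the
    # additive pattern to avoid until the oslo.config fix [2] is released.
    help=_('First sentence of the help text. ') +
         _('Second sentence appended with "+".'))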


This may have an adverse effect on the glance gate and on further
releases of glance_store in Newton in this release-critical period.
Also, note that the 0.14.0 release of glance_store had to be pinned down
[3] [4], and I've proposed a short release of the store lib, 0.15.0 [5],
that will help us evaluate the store and propose & review changes therein
in an informed manner.


For those working on improving help text, I request you to test your
latest changes the way Doug has described testing the oslo.config change in
a comment on PS1 here [2]. Please leave a comment on your review once
you have tested your config help text changes as such and help the
reviewers save time by allowing them to 'not' do the same locally for
every help text change proposed.


Hope that the severity of the issue, the message and the intent are clear to
all concerned. If not, please feel free to reach out. Let's all
make sure we progress through the review queue in the most important
phase of the release cycle.


[1] https://bugs.launchpad.net/oslo.config/+bug/1605648

[2] https://review.openstack.org/#/c/347907/

[3] https://bugs.launchpad.net/glance-store/+bug/1606746

[4]
http://lists.openstack.org/pipermail/openstack-docs/2016-July/008910.html

[5] https://review.openstack.org/#/c/347621/

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt

2016-07-27 Thread Devdatta Kulkarni
Hi Sudipta,

There is another approach you can consider which does not need any changes to 
Nova.

The approach works as follows:
- Save the container image tar in Swift
- Generate a Swift tempURL for the container file
- Boot Nova vm and pass instructions for following steps through cloud init / 
user data
  - download the container file from Swift (wget)
  - load it (docker load)
  - run it (docker run)
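
A rough #cloud-config sketch of those boot-time steps (hypothetical image name
and placeholder tempURL values, not actual Solum code):

#cloud-config
runcmd:
  # 1. download the container image tar from Swift via its tempURL
  - wget -O /tmp/app-image.tar "https://swift.example.com/v1/AUTH_acct/images/app-image.tar?temp_url_sig=SIG&temp_url_expires=EXPIRY"
  # 2. load it into the local docker daemon
  - docker load -i /tmp/app-image.tar
  # 3. run the application container
  - docker run -d app-image:latest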

We have implemented this approach in Solum (where we use Heat for deploying a
VM and then run the application container on it by providing the above
instructions through the user_data of the HOT).

Thanks,
Devdatta


-


From: Sudipta Biswas 
Sent: Wednesday, July 27, 2016 9:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt
  
Premise:

While working with customers, we have realized:

- They want to use containers but are wary of using the same host kernel for 
multiple containers.
- They already have a significant investment (including skills) in OpenStack's 
Virtual Machine workflow and would like to re-use it as much as possible.
- They are very interested in using docker images.

There are some existing approaches, like the Hyper and Secure Containers workflows, 
which already try to address the first point. But we wanted to arrive at an 
approach that addresses all of the above three in the context of OpenStack Nova 
with minimal changes.


Design Considerations:

We tried a few experiments with the present libvirt driver in nova to 
accomplish a work flow to deploy containers inside virtual machines in 
OpenStack via Nova.

The fundamental premise of our approach is to run a single container 
encapsulated in a single VM. This VM image just has a bare minimum operating 
system required to run it.
The container filesystem comes from the docker image.

We would like to get the feedback on the below approaches from the community 
before proposing this as a spec or blueprint.


Approach 1

User workflow:

1. The docker image is obtained in the form of a tar file.
2. Upload this tar file to glance. This support is already there in glance, where 
a container-type of docker is supported.
3. Use this image along with nova libvirt driver to deploy a virtual machine.

Following are some of the changes to the OpenStack code that implements this 
approach:

1. Define a new conf parameter in nova called – 
base_vm_image=/var/lib/libvirt/images/baseimage.qcow2
This option is used to specify the base VM image.

2. define a new sub_virt_type = container in nova conf. Setting this parameter 
will ensure mounting of the container filesystem inside the VM.
Unless qemu and kvm are used as virt_type – this workflow will not work at this 
moment.

3. In the virt/libvirt/driver.py we do the following based on the sub_virt_type 
= container:

- We create a qcow2 disk from the base_vm_image and expose that 'disk' as the 
boot disk for the virtual machine.
 Note – this is very similar to a regular virtual machine boot minus the fact 
that the image is not downloaded from
glance but instead it is present on the host.


- We download the docker image into the /var/lib/nova/instances/_base directory 
and then for each new virtual machine boot – we create a new directory 
/var/lib/nova/instances/ as it's and copy the docker filesystem 
to it. Note – there are subsequent improvements to this idea that could be 
performed around the lines of using a union filesystem approach.
- The step above allows each virtual machine to have a different copy of the 
filesystem.
- We create a 'passthrough' mount of the filesystem via libvirt. This code is 
also present in the nova libvirt driver and we just trigger it based on our 
sub_virt_type parameter.

4. A cloud init – userdata is provided that looks somewhat like this:

runcmd:
  - mount -t 9p -o trans=virtio share_dir /mnt
  - chroot /mnt /bin/

The command_to_run is usually the entrypoint for the docker image.

There could be better approaches to determine the entrypoint as well (say from 
docker image metadata).


Approach 2.

In this approach, the workflow remains the same as the first one with the 
exception that the
docker image is changed into a qcow2 image using a tool like virt-make-fs 
before uploading it to glance, instead of a tar file.

A tool like virt-make-fs can convert a tar file to a qcow2 image very easily.
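
For example, something along these lines (file names are placeholders):

# build an ext4-formatted qcow2 image containing the contents of the tar
virt-make-fs --format=qcow2 --type=ext4 app-image.tar app-image.qcow2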

This image is then downloaded on the compute node and a qcow2 disk is 
created/attached to the virtual machine that boots using the base_vm_image.


Approach 3

A custom qcow2 image is created using kernel, initramfs and the docker image 
and uploaded to glance.  No changes are needed in openstack nova. It boots as a 
regular VM.

Changes will be needed in image generation tools and will involve a few 
additional tasks from an operator point of view.


I look forward to your comments/suggestions on the above.


Thanks,
Sudipto

    
_

[openstack-dev] [Cinder] Support for volume sharing across multiple VM's

2016-07-27 Thread Adam Lawson
I heard there's been some attention given to, and progress made toward,
supporting sharing a single volume with multiple VMs. Where are we along the
development curve, and has anyone been able to get this to work?

Thanks!

//adam

*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [UI] Version

2016-07-27 Thread Ben Nemec
On 07/27/2016 11:13 AM, Dougal Matthews wrote:
> 
> 
> On Wednesday, 27 July 2016, Steven Hardy  > wrote:
> 
> On Wed, Jul 27, 2016 at 02:08:08PM +0100, Dougal Matthews wrote:
> >On 27 July 2016 at 12:41, Honza Pokorny  > wrote:
> >
> >  Hello folks,
> >
> >  As the tripleo-ui project is quickly maturing, it might be
> time to start
> >  versioning our code.  As of now, the version is set to 0.0.1
> and that
> >  hardly reflects the state of the project.
> >
> >  What do you think?
> >
> >Yup, Sounds good to me! I would suggest that we make the Newton
> >release 1.0 and then continue from there. I am not sure what the
> >normal pattern is tho'
> 
> No, please don't invent an independent versioning scheme.  tripleo UI
> should be a tripleo deliverable, and part of the coordinated release
> (see
> my reply to Honza).
> 
> 
> Aha, sorry, I must have misunderstood how things are handled generally.
> I was remembering how tripleoclient was first versioned. Is there a
> document somewhere covering the process? All I can find is the release
> management wiki page.

We should be following semver: http://semver.org/

I don't know that tagging Newton as 1.0 would necessarily be wrong; it
just means we're committing to a stable API at that point (whatever that
means for the UI).

>  
> 
> 
> Thanks!
> 
> Steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [glance_store] Removing the S3 driver

2016-07-27 Thread Nikhil Komawar
Hi all,


Just wanted to follow up on the deprecation of the S3 driver that Flavio
started [1]; we are now in the phase of removing the S3 driver from the
glance_store tree [2]. I've added some documentation to the release
notes, but I have a feeling that operators may be using more than that to
read up on glance_store updates. This was discussed a bit during the
glance-operators sync last month [3] too.


The plan is to release the store as soon as this review [2] merges and after
the currently proposed glance_store v0.15.0 release is out & tested on the
glance gate. For now, the tentative release date with this change is
sometime mid-next-week, so that this happens in the Newton time frame.


Either way, I just wanted to give a quick heads-up and see if I should
be doing more courtesy additions, doc updates, etc. towards this.
Reviews on, and feedback about, the proposal are welcome as well.


[1] https://review.openstack.org/#/c/266077/

[2] https://review.openstack.org/#/c/347620/

[3] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync


Cheers,

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo-test-cloud-rh2 local mirror server

2016-07-27 Thread Paul Belanger
On Wed, Jul 27, 2016 at 02:54:00PM +0100, Derek Higgins wrote:
> On 21 July 2016 at 23:04, Paul Belanger  wrote:
> > Greetings,
> >
> > I write today to see how I can remove this server from 
> > tripleo-test-cloud-rh2. I
> > have an open patch[1] currently to migrate tripleo-ci to use our AFS 
> > mirrors for
> > centos and epel.  However, I'm still struggling to see what else you are 
> > using
> > the local mirror for.
> >
> > From what I see, there appears to be some puppet modules in the mirror?
> >
> > The reason I am doing this work, is to help bring tripleo inline with
> > openstack-infra tooling.  There shouldn't be the need for a project to 
> > maintain
> > its own infrastructure outside of openstack-infra.  If so, I see that as 
> > some
> > sort of a failure between the project and openstack-infra.   And with that 
> > in
> > mind, I am here to help fix that.
> >
> > For the most part, I think we have everything currently in place to migrate 
> > away
> > from your locally mirror. I just need some help figuring what else is left 
> > and
> > then delete it.
> 
> Hi Paul,
> The mirror server hosts 3 sets of data used in CI long with a cron
> a job aimed at promoting trunk repositories,
> The first you've already mentioned, there is a list of puppet modules
> hosted here, we soon hope to move to packaged puppet modules so the
> need for this will go away.
> 
Ya, I was looking at an open review to rework this. If we moved these puppet
modules to tarballs instead of git repos, I think we could mirror them pretty easily
into our AFS mirrors.  Them being git repos requires more work because of some
policies around git repos.

> The second is a mirror of the centos cloud images, these are updated
> hourly by the centos-cloud-images cronjob[1], I guess these could be
> easily replaced with the AFS server
> 
So 2 things here.

1) I've reached out to CentOS asking to enable rsync support on
http://cloud.centos.org/ if they do that, I can easily enable rsync for it.

2) What about moving away from the centos diskimage-builder element and switching
to the centos-minimal element? I have an open review for this, but need help with
actually testing this.  It moves away from using the cloud image, and instead
uses yumdownloader to prebuild the images.
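
Roughly along these lines (a sketch only; the exact element names and
variables would need checking against the open review):

DIB_RELEASE=7 disk-image-create -o centos7-minimal centos-minimal vm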

> Then we come to the parts where it will probably be more tricky to
> move away from our own server
> 
> o cached images - our nightly periodic jobs run tripleo ci with
> master/HEAD for all openstack projects (using the most recent rdo
> trunk repository), if the jobs pass then we upload the overcloud-full
> and ipa images to the mirror server along with logging what jobs
> passed, this happens at the end of toci_instack.sh[2], nothing else
> happens at this point the files are just uploaded nothing starts using
> them yet.
> 
I suggest we move this to tarballs.o.o for now; this is what other projects are
doing.  I believe we are also considering moving this process into AFS too.

> o promote script - hourly we then run the promote script[3], this
> script is whats responsible for the promotion of the master rdo
> repository that is used by tripleo ci (and devs), it checks to see if
> images have been updated to the mirror server by the periodic jobs,
> and if all of the jobs we care about (currently
> periodic-tripleo-ci-centos-7-ovb-ha
> periodic-tripleo-ci-centos-7-ovb-nonha[4]) passed then it does 2
> things
>   1. updates the current-tripleo link on the mirror server[5]
>   2. updates the current-tripleo link on the rdo trunk server[6]
> By doing this we ensure that the the current-tripleo link on the rdo
> trunk server is always pointing to something that has passed tripleo
> ci jobs, and that tripleo ci is using cached images that were built
> using this repository
> 
Okay, I think we need to dive more into this. It might be possible to make this
a post job or use mirror-update.openstack.org

> We've had to run this promote script on the mirror server as the
> individual jobs run independently and in order to make the promote
> decision we needed somewhere that is aware of the status of all the
> jobs
> 
> Hope this answers your questions,
> Derek.
> 
> [1] - 
> http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/scripts/mirror-server/mirror-server.pp#n40
> [2] - 
> http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_instack.sh#n198
> [3] - 
> http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/scripts/mirror-server/promote.sh
> [4] - 
> http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/scripts/mirror-server/mirror-server.pp#n51
> [5] - http://8.43.87.241/builds/current-tripleo/
> [6] - 
> http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/
> 
> >
> > [1] https://review.openstack.org/#/c/326143/
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?

[openstack-dev] [glance] [glare] Glance virtual midcycle recordings

2016-07-27 Thread Nikhil Komawar
Hi all,


The Glance virtual midcycle was last month, and we were able to record
the event and write meeting notes. Please see the "Recordings" (line 30
at the moment) sub-title for audio/visual updates and meeting notes near the
bottom of the etherpad linked below.


https://etherpad.openstack.org/p/newton-glance-virtual-midcycle


There have been some noted instances where people, either inadvertently
or otherwise, have updated the etherpad, thus removing the text. Please
be considerate and watchful of your keyboard when keeping the etherpad open.

But for the record, here are the content/links that can be accessed if you
are not a fan of the etherpad colors.

Recordings:

* Day 1 Recording (MP4) (143MB) - Download from
https://dl.dropboxusercontent.com/s/f5syv2d1gyjh8lm/GlanceVirtualMidcycleDay1.mp4
* Day 2 Glance Newton Midcycle Meetup - Glare API: https://youtu.be/FgbgiaAFGxE
* Day 2 Glance Specs discussion: https://youtu.be/feiORKFiMI0

More permanent location of the meeting details:
http://paste.openstack.org/show/542667/

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Chris Friesen

On 07/27/2016 09:59 AM, Ed Leafe wrote:

On Jul 27, 2016, at 10:51 AM, Joshua Harlow  wrote:


Whether to have competing projects in the big tent was debated by the TC
at the time and my recollection is that we decided that was a good thing
-- if someone wanted to develop a Nova replacement, then let them do it
in public with the community. It would either win or lose based on its
merits. Why is this not something which can happen here as well?


For real, I (or someone) can start a nova replacement without getting rejected 
(or yelled at or ...) by the TC saying it's a competing project??? Wow, this is 
news to me...


No, you can’t start a Nova replacement and still call yourself OpenStack.

The sense I have gotten over the years from the TC is that gratuitous 
competition is strongly discouraged.


I seem to recall that back during the "big tent" discussion people were talking 
about allowing competing projects that performed the same task, and letting 
natural selection decide which one survived.


For example, at 
"http://www.joinfu.com/2014/09/answering-the-existential-question-in-openstack/"; 
Jay Pipes said that being under the big tent should not mean that the project is 
the only/best way to provide a specific function to OpenStack users.


On the other hand, the OpenStack new projects requirements *do* explicitly state 
that "Where it makes sense, the project cooperates with existing projects rather 
than gratuitously competing or reinventing the wheel."


Maybe it boils down to the definition of "gratuitous" competition.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] Questions regarding Kuryr

2016-07-27 Thread Vikas Choudhary
Hi Amir,

Thank You for showing interest in Kuryr!!!

One simple approach could be:
1> Have neutron and keystone running.
2> git clone the kuryr-libnetwork repo and follow the README instructions to
install kuryr (a rough sketch is below).
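
For example (illustrative only; the repo URL and install steps are assumptions,
so defer to the README for your environment):

git clone https://git.openstack.org/openstack/kuryr-libnetwork
cd kuryr-libnetwork
sudo pip install .
# then configure the driver (Neutron/Keystone endpoints) and start it as
# described in the README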

Hope this helps. If you still face issues, please feel free to reach out to
us on irc channel, #openstack-kuryr


Thanks & Regards
Vikas

On Wed, Jul 27, 2016 at 10:21 PM, Amir, Shai  wrote:

> Hi,
>
>
>
> I saw that you are one of the leading contributors to Kuryr and was
> wondering if you can point me to the right direction.
>
> I am trying to get Kuryr working on a simple mitaka based devstack install
> along with docker.
>
>
>
> Any pointers to how to get this installed and working will be highly
> appreciated.
>
>
>
> Best regards,
>
> Shai
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] glance developers and operators midcycle sync -- recordings

2016-07-27 Thread Nikhil Komawar
Hi all,


We had a midcycle sync last month with the glance development team and
operators team, where individuals from across varied time zones
participated. While the scheduling was a challenge and some had to join
too early or too late, the event was pretty successful. We had a ton of
productive discussions for a first-time event of this nature. I hope
that we will continue to keep this collaboration going and maintain a
cadence between development and operators.


Thanks to Kris for letting us use a tool that makes recordings possible.
I've managed to transcode the audio/video into a YouTube video with the
chat transcript posted as a paste in the description there.


The audio/video recording is available at:
https://youtu.be/DuSvm92iscM

Chat transcript available at:   http://paste.openstack.org/show/542469/




Please find the full details of the event including notes on the various
topics at:
https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync


As always, feel free to reach out if you have any questions or comments. Cheers!

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Ed Leafe
On Jul 27, 2016, at 12:10 PM, Chris Friesen  wrote:

> Maybe it boils down to the definition of "gratuitous" competition.

Precisely, which is why we have humans and not computer algorithms to decide 
these things.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Jay Pipes

On 07/27/2016 10:10 AM, Chris Friesen wrote:

On 07/27/2016 09:59 AM, Ed Leafe wrote:

On Jul 27, 2016, at 10:51 AM, Joshua Harlow 
wrote:


Whether to have competing projects in the big tent was debated by
the TC
at the time and my recollection is that we decided that was a good
thing
-- if someone wanted to develop a Nova replacement, then let them do it
in public with the community. It would either win or lose based on its
merits. Why is this not something which can happen here as well?


For real, I (or someone) can start a nova replacement without getting
rejected (or yelled at or ...) by the TC saying it's a competing
project??? Wow, this is news to me...


No, you can’t start a Nova replacement and still call yourself OpenStack.

The sense I have gotten over the years from the TC is that gratuitous
competition is strongly discouraged.


I seem to recall that back during the "big tent" discussion people were
talking about allowing competing projects that performed the same task,
and letting natural selection decide which one survived.

For example, at
"http://www.joinfu.com/2014/09/answering-the-existential-question-in-openstack/";
Jay Pipes said that being under the big tent should not mean that the
project is the only/best way to provide a specific function to OpenStack
users.

On the other hand, the OpenStack new projects requirements *do*
explicitly state that "Where it makes sense, the project cooperates with
existing projects rather than gratuitously competing or reinventing the
wheel."

Maybe it boils down to the definition of "gratuitous" competition.


For the record, I think I've always been clear that I don't see 
competition as a bad thing within the OpenStack ecosystem; however, I have 
always been a proponent of having a *single consistent REST API* for a 
particular service type. I think innovation should happen at the 
implementation layer, but the public HTTP APIs should be collated and 
reviewed for overlap and inconsistencies.


This was why in the past I haven't raised a stink about multiple 
deployment tools, since there was no OpenStack HTTP API for deployment 
of OpenStack itself. But I have absolutely raised concerns over overlap 
of HTTP APIs, like is the case with Monasca and various Telemetry 
project APIs. Again, implementation diversity cool. Public HTTP API 
diversity, not cool.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Fox, Kevin M
Kolla is providing a public API for Docker containers and Kubernetes templates 
though. So it's not just a deployment tool issue. It's not specifically REST, but 
does that matter?

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Wednesday, July 27, 2016 10:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting 
Fuel CCP (docker/k8s) kicked off

On 07/27/2016 10:10 AM, Chris Friesen wrote:
> On 07/27/2016 09:59 AM, Ed Leafe wrote:
>> On Jul 27, 2016, at 10:51 AM, Joshua Harlow 
>> wrote:
>>
 Whether to have competing projects in the big tent was debated by
 the TC
 at the time and my recollection is that we decided that was a good
 thing
 -- if someone wanted to develop a Nova replacement, then let them do it
 in public with the community. It would either win or lose based on its
 merits. Why is this not something which can happen here as well?
>>>
>>> For real, I (or someone) can start a nova replacement without getting
>>> rejected (or yelled at or ...) by the TC saying it's a competing
>>> project??? Wow, this is news to me...
>>
>> No, you can’t start a Nova replacement and still call yourself OpenStack.
>>
>> The sense I have gotten over the years from the TC is that gratuitous
>> competition is strongly discouraged.
>
> I seem to recall that back during the "big tent" discussion people were
> talking about allowing competing projects that performed the same task,
> and letting natural selection decide which one survived.
>
> For example, at
> "http://www.joinfu.com/2014/09/answering-the-existential-question-in-openstack/";
> Jay Pipes said that being under the big tent should not mean that the
> project is the only/best way to provide a specific function to OpenStack
> users.
>
> On the other hand, the OpenStack new projects requirements *do*
> explicitly state that "Where it makes sense, the project cooperates with
> existing projects rather than gratuitously competing or reinventing the
> wheel."
>
> Maybe it boils down to the definition of "gratuitous" competition.

For the record I think I've always been clear that I don't see
competition as a bad thing within the OpenStack ecosystem however I have
always been a proponent of having a *single consistent REST API* for a
particular service type. I think innovation should happen at the
implementation layer, but the public HTTP APIs should be collated and
reviewed for overlap and inconsistencies.

This was why in the past I haven't raised a stink about multiple
deployment tools, since there was no OpenStack HTTP API for deployment
of OpenStack itself. But I have absolutely raised concerns over overlap
of HTTP APIs, like is the case with Monasca and various Telemetry
project APIs. Again, implementation diversity cool. Public HTTP API
diversity, not cool.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kolla-kubernetes]

2016-07-27 Thread Ryan Hallisey
Hi all,

The kolla-kubernetes project needs to introduce some changes into
kolla's globals.yml.  This could quickly become an issue because the community
does not want to have too many variables exposed and create a mess for any
backports.

In order to get kolla-kubernetes unblocked, kolla-kubernetes users will need
to copy/paste variables into the globals.yml.  This will be documented on the
kolla-kubernetes side. The Kolla config will be set up to pick up these vars
and apply any changes.  This solution will work for the time being, until Kolla
reaches the conclusion of the N cycle and the community evaluates a repo split
and a config split.

https://review.openstack.org/#/c/327925/
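For illustration only, a minimal sketch of that interim copy/overlay step, assuming
PyYAML and a hypothetical override file; the real variable names and instructions
will live in the kolla-kubernetes documentation:

```python
# Hedged sketch of merging kolla-kubernetes overrides into globals.yml.
# The override file name is an assumption, not an agreed-upon convention.
import yaml

GLOBALS = "/etc/kolla/globals.yml"
OVERRIDES = "kolla-kubernetes-overrides.yml"   # hypothetical file

with open(GLOBALS) as f:
    cfg = yaml.safe_load(f) or {}

with open(OVERRIDES) as f:
    cfg.update(yaml.safe_load(f) or {})        # kolla's config picks these up

with open(GLOBALS, "w") as f:
    yaml.safe_dump(cfg, f, default_flow_style=False)
```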

Thanks,
Ryan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][requirements] Re: [Openstack-stable-maint] Stable check of openstack/heat failed

2016-07-27 Thread Tony Breeds
On Wed, Jul 27, 2016 at 02:20:38PM +0800, Ethan Lynn wrote:
> Hi Tony,
>   I submit a patch to use upper-constraints for review,
>   https://review.openstack.org/#/c/347639/
>    . Let’s wait for the feedback
>   and results.

Thanks.  I see that you have reviews for master, mitaka and liberty.  Thanks
for doing that.

Once the master patch merges, let me know and I'll help approve the stable patches.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt

2016-07-27 Thread Maxime Belanger
+1 on this,


Still, you lose all the great stuff about containers, but it is a first step
towards a native container orchestration platform.


From: Devdatta Kulkarni 
Sent: July 27, 2016 12:21:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][rfc] Booting docker images using nova 
libvirt

Hi Sudipta,

There is another approach you can consider which does not need any changes to 
Nova.

The approach works as follows:
- Save the container image tar in Swift
- Generate a Swift tempURL for the container file
- Boot a Nova VM and pass instructions for the following steps through cloud-init /
user data
  - download the container file from Swift (wget)
  - load it (docker load)
  - run it (docker run)

We have implemented this approach in Solum (where we use Heat for deploying a
VM and
then run the application container on it by providing the above instructions through
the user_data of the HOT).
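For illustration, a minimal sketch of how the tempURL and user_data from the steps
above could be put together, assuming python-swiftclient and hypothetical
endpoint/object/key/image names:

```python
# Hedged sketch of the Swift tempURL + cloud-init flow described above; the
# Swift endpoint, object path, temp-URL key and image name are all assumptions.
from swiftclient.utils import generate_temp_url

swift_endpoint = "https://swift.example.com"            # assumed
object_path = "/v1/AUTH_demo/app-images/app.tar"        # assumed Swift object
temp_url_key = "s3cr3t"                                 # X-Account-Meta-Temp-URL-Key

url = swift_endpoint + generate_temp_url(
    object_path, seconds=3600, key=temp_url_key, method="GET")

# Instructions executed by cloud-init inside the guest: download, load, run.
user_data = """#cloud-config
runcmd:
  - wget -O /tmp/app.tar "{url}"
  - docker load -i /tmp/app.tar
  - docker run -d app:latest
""".format(url=url)

# user_data is then passed to the Nova boot call (or, as in Solum, embedded
# in the HOT template's user_data property).
```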

Thanks,
Devdatta


-


From: Sudipta Biswas 
Sent: Wednesday, July 27, 2016 9:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt

Premise:

While working with customers, we have realized:

- They want to use containers but are wary of using the same host kernel for 
multiple containers.
- They already have a significant investment (including skills) in OpenStack's 
Virtual Machine workflow and would like to re-use it as much as possible.
- They are very interested in using docker images.

There are some existing approaches, like the Hyper and Secure Containers workflows,
which already try to address the first point. But we wanted to arrive at an
approach that addresses all three of the above in the context of OpenStack Nova with
minimal changes.


Design Considerations:

We tried a few experiments with the present libvirt driver in nova to
accomplish a workflow that deploys containers inside virtual machines in
OpenStack via Nova.

The fundamental premise of our approach is to run a single container 
encapsulated in a single VM. This VM image just has a bare minimum operating 
system required to run it.
The container filesystem comes from the docker image.

We would like to get the feedback on the below approaches from the community 
before proposing this as a spec or blueprint.


Approach 1

User workflow:

1. The docker image is obtained in the form of a tar file.
2. Upload this tar file to glance. This support is already there in glance, where
a container-type of docker is supported.
3. Use this image along with the nova libvirt driver to deploy a virtual machine.

Following are some of the changes to the OpenStack code that implements this 
approach:

1. Define a new conf parameter in nova called – 
base_vm_image=/var/lib/libvirt/images/baseimage.qcow2
This option is used to specify the base VM image.

2. Define a new sub_virt_type = container in nova conf. Setting this parameter
will ensure mounting of the container filesystem inside the VM.
Unless qemu or kvm is used as the virt_type, this workflow will not work at the
moment.

3. In the virt/libvirt/driver.py we do the following based on the sub_virt_type 
= container:

- We create a qcow2 disk from the base_vm_image and expose that 'disk' as the 
boot disk for the virtual machine.
 Note – this is very similar to a regular virtual machine boot minus the fact 
that the image is not downloaded from
glance but instead it is present on the host.


- We download the docker image into the /var/lib/nova/instances/_base directory
and then for each new virtual machine boot – we create a new per-instance directory
under /var/lib/nova/instances/ and copy the docker filesystem
to it. Note – there are subsequent improvements to this idea that could be
made along the lines of using a union filesystem approach.
- The step above allows each virtual machine to have a different copy of the 
filesystem.
- We create a 'passthrough' mount of the filesystem via libvirt. This code is 
also present in the nova libvirt driver and we just trigger it based on our 
sub_virt_type parameter.

4. A cloud-init userdata is provided that looks somewhat like this:

runcmd:
  - mount -t 9p -o trans=virtio share_dir /mnt
  - chroot /mnt /bin/

The command_to_run is usually the entrypoint for the docker image.

There could be better approaches to determine the entrypoint as well (say from 
docker image metadata).
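As one hedged possibility (assuming the image was produced by "docker save" and
using a hypothetical path), the entrypoint could be read straight from the image's
config JSON rather than hard-coded in the cloud-init runcmd:

```python
# Hedged sketch: read the entrypoint out of a "docker save" style tarball so
# the cloud-init runcmd above does not have to hard-code the command to run.
# The tarball path below is a hypothetical example.
import json
import tarfile

def entrypoint_from_docker_tar(tar_path):
    """Return Entrypoint + Cmd recorded in the image's config JSON."""
    with tarfile.open(tar_path) as tar:
        manifest = json.load(tar.extractfile("manifest.json"))
        config_name = manifest[0]["Config"]            # e.g. "<image-id>.json"
        config = json.load(tar.extractfile(config_name))
    cfg = config.get("config", {}) or {}
    return (cfg.get("Entrypoint") or []) + (cfg.get("Cmd") or [])

# e.g. ['nginx', '-g', 'daemon off;'] for an nginx image
print(entrypoint_from_docker_tar("/var/lib/nova/instances/_base/app.tar"))
```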


Approach 2.

In this approach, the workflow remains the same as the first one, with the
exception that the
docker image is converted into a qcow2 image using a tool like virt-make-fs
before uploading it to glance, instead of a tar file.

A tool like virt-make-fs can convert a tar file to a qcow2 image very easily.
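For example, a hedged sketch of that conversion step driven from Python; the size
headroom and filesystem type are assumptions, not part of the proposal above:

```python
# Hedged sketch of the Approach 2 conversion step using libguestfs'
# virt-make-fs; the --size headroom and --type filesystem are assumptions.
import subprocess

subprocess.run(
    ["virt-make-fs", "--format=qcow2", "--type=ext4", "--size=+200M",
     "app.tar", "app-rootfs.qcow2"],
    check=True)
```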

This image is then downloaded on the compute node and a qcow2 disk is 
created/attached to the virtual machine that boots using the base_vm_image.


Approach 3

A custom qcow2 image is created using kernel, i

[openstack-dev] [sahara] using 'sahara' cli commands is deprecated, how widely it's used?

2016-07-27 Thread Vitaly Gridnev
Hello,

In the Mitaka release the sahara team marked the old CLI commands starting with
'sahara' as deprecated, since a new openstackclient plugin was implemented
('openstack dataprocessing') and it has all the features that the old CLI has.

The question is the following: is there something that can stop us from
removing the old CLI from the saharaclient code in Newton or Ocata (which is
probably the much better choice)?

Would love to see feedback on this topic.

-- 
Best Regards,
Vitaly Gridnev,
Project Technical Lead of OpenStack DataProcessing Program (Sahara)
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Independent tag and stable branches

2016-07-27 Thread Jeremy Stanley
On 2016-07-27 14:46:18 +0200 (+0200), Julien Danjou wrote:
> If I understand correctly, I think this is what we do to solve what you
> describe:
> 
>   
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/gnocchi.yaml#n25

Yep, I missed that when skimming your jobs, thanks!

Though that points out that you have a very incomplete mapping
defined, so presumably you've only added each of those entries after
discovering that an attempted backport failed tests.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][neutron] neutron-lib 0.3.0 release (newton)

2016-07-27 Thread no-reply
We are high-spirited to announce the release of:

neutron-lib 0.3.0: Neutron shared routines and utilities

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/neutron-lib

With package available at:

https://pypi.python.org/pypi/neutron-lib

Please report issues through launchpad:

http://bugs.launchpad.net/neutron

For more details, please see below.

Changes in neutron-lib 0.2.0..0.3.0
---

1d44493 Remove discover from test-requirements
37c5a03 Add validator to test integers
7c09268 Deprecate N523 check that forbids oslo.* imports
0316d00 devref for public API docstring
cf874bf Migration report: validate that bc is installed
23aea4e add tags to api-ref files for the content verification phase
9dc6770 Add tool to track migration to neutron-lib
5cdbb04 Document release steps for neutron-lib
5f4af17 Expand the API reference Table of Content
911c1ac Updated from global requirements
7875c52 Fix simple typo
de11a26 Tweak validation logic for subport validator
646d6f1 Updated from global requirements
9157ed5 Update documents to address some issues
159e04e Updated from global requirements
64991fd Rehome IPV6_MODES constants
3fcd939 Update validator accessors
112eef6 Forbid eventlet based code
6f09e4d Make the constant Sentinel() class public
84491d2 100% unit test coverage for hacking/checks.py
ba717a0 Localized exception message hacking check
0a6a347 Updated from global requirements
0ac922e WADL to RST migration
b82347d Add translation validations to the hacking policy
695eccf Updated from global requirements
e419f24 Fix E128 hacking errors and enable it
1cb7708 TrivialFix: Fix a bad indentation in a doc file
142c2b7 Enable local hacking rule in neutron-lib
4031e12 Hacking: update iteritems hacking message
c607b44 Add Neutron L3 agent types
e336158 Fix exception for invalid type
f54a138 Add subport validator for vlan-aware-vms
ea2bcdd Updated from global requirements
445e74d Remove unused oslo.service requirement
bb13c50 Fixed type:dict validator passes unexpected keys


Diffstat (except docs and test files)
-

.gitignore |1 +
HACKING.rst|6 +-
api-ref/source/conf.py |  222 ++
api-ref/source/index.rst   |9 +
.../extensions/extension-show-response.json|9 +
.../extensions/extensions-list-response.json   |  123 +
.../samples/firewalls/firewall-create-request.json |6 +
.../firewalls/firewall-create-response.json|   14 +
.../firewalls/firewall-policies-list-response.json |   15 +
.../firewalls/firewall-policy-create-request.json  |8 +
.../firewalls/firewall-policy-create-response.json |   13 +
.../firewall-policy-insert-rule-request.json   |5 +
.../firewall-policy-insert-rule-response.json  |   14 +
.../firewall-policy-remove-rule-request.json   |3 +
.../firewall-policy-remove-rule-response.json  |   13 +
.../firewalls/firewall-policy-show-response.json   |   13 +
.../firewalls/firewall-policy-update-request.json  |8 +
.../firewalls/firewall-policy-update-response.json |   14 +
.../firewalls/firewall-rule-create-request.json|9 +
.../firewalls/firewall-rule-create-response.json   |   19 +
.../firewalls/firewall-rule-show-response.json |   19 +
.../firewalls/firewall-rule-update-request.json|5 +
.../firewalls/firewall-rule-update-response.json   |   19 +
.../firewalls/firewall-rules-list-response.json|   21 +
.../samples/firewalls/firewall-show-response.json  |   14 +
.../samples/firewalls/firewall-update-request.json |5 +
.../firewalls/firewall-update-response.json|   14 +
.../samples/firewalls/firewalls-list-response.json |   16 +
.../samples/flavors/flavor-associate-request.json  |5 +
.../samples/flavors/flavor-associate-response.json |5 +
.../samples/flavors/flavor-create-request.json |8 +
.../samples/flavors/flavor-create-response.json|   10 +
.../samples/flavors/flavor-show-response.json  |   10 +
.../samples/flavors/flavor-update-request.json |7 +
.../samples/flavors/flavor-update-response.json|   10 +
.../samples/flavors/flavors-list-response.json |   12 +
.../flavors/service-profile-create-request.json|8 +
.../flavors/service-profile-create-response.json   |9 +
.../flavors/service-profile-show-response.json |9 +
.../flavors/service-profile-update-request.json|8 +
.../flavors/service-profile-update-response.json   |9 +
.../flavors/service-profiles-list-response.json|   18 +
.../lbaas/healthmonitor-associate-request.json |5 +
.../lbaas/healthmonitor-associate-response.json|3 +
.../lbaas/healthmonitor-create-request.json|   12 +
.../lbaas/healthmonitor-create-response.json   |   15 +
.../samples/lbaas/healthmonitor-show-response.j

Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testing core

2016-07-27 Thread Terry Wilson
On Tue, Jul 26, 2016 at 10:04 AM, Jakub Libosvar  wrote:
> On 26/07/16 16:56, Assaf Muller wrote:
>>
>> We've hit critical mass from cores interesting in the testing area.
>>
>> Welcome Jakub to the core reviewer team. May you enjoy staring at the
>> Gerrit interface and getting yelled at by people... It's a glamorous
>> life.
>
>
> Thanks everyone for support! I'll try to do my best :)

Congrats!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Jay Pipes

On 07/27/2016 01:59 PM, Fox, Kevin M wrote:

Kolla is providing a public api for docker containers and kubernetes templates 
though. So its not just a deployment tool issue. Its not specifically rest, but 
does that matter?


Yes, it matters.

Kolla isn't providing a user-interfacing HTTP API for doing something in 
a cloud. Kolla is providing a prescriptive way of building Docker images 
from a set of Dockerfiles and various configuration file templates. That 
isn't a consumable API. That's a reference manual.


Best,
-jay



From: Jay Pipes [jaypi...@gmail.com]
Sent: Wednesday, July 27, 2016 10:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting 
Fuel CCP (docker/k8s) kicked off

On 07/27/2016 10:10 AM, Chris Friesen wrote:

On 07/27/2016 09:59 AM, Ed Leafe wrote:

On Jul 27, 2016, at 10:51 AM, Joshua Harlow 
wrote:


Whether to have competing projects in the big tent was debated by
the TC
at the time and my recollection is that we decided that was a good
thing
-- if someone wanted to develop a Nova replacement, then let them do it
in public with the community. It would either win or lose based on its
merits. Why is this not something which can happen here as well?


For real, I (or someone) can start a nova replacement without getting
rejected (or yelled at or ...) by the TC saying it's a competing
project??? Wow, this is news to me...


No, you can’t start a Nova replacement and still call yourself OpenStack.

The sense I have gotten over the years from the TC is that gratuitous
competition is strongly discouraged.


I seem to recall that back during the "big tent" discussion people were
talking about allowing competing projects that performed the same task,
and letting natural selection decide which one survived.

For example, at
"http://www.joinfu.com/2014/09/answering-the-existential-question-in-openstack/";
Jay Pipes said that being under the big tent should not mean that the
project is the only/best way to provide a specific function to OpenStack
users.

On the other hand, the OpenStack new projects requirements *do*
explicitly state that "Where it makes sense, the project cooperates with
existing projects rather than gratuitously competing or reinventing the
wheel."

Maybe it boils down to the definition of "gratuitous" competition.


For the record I think I've always been clear that I don't see
competition as a bad thing within the OpenStack ecosystem however I have
always been a proponent of having a *single consistent REST API* for a
particular service type. I think innovation should happen at the
implementation layer, but the public HTTP APIs should be collated and
reviewed for overlap and inconsistencies.

This was why in the past I haven't raised a stink about multiple
deployment tools, since there was no OpenStack HTTP API for deployment
of OpenStack itself. But I have absolutely raised concerns over overlap
of HTTP APIs, like is the case with Monasca and various Telemetry
project APIs. Again, implementation diversity cool. Public HTTP API
diversity, not cool.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][neutron] python-neutronclient 5.0.0 release (newton)

2016-07-27 Thread no-reply
We are high-spirited to announce the release of:

python-neutronclient 5.0.0: CLI and Client Library for OpenStack
Networking

This release is part of the newton release series.

With source available at:

https://git.openstack.org/cgit/openstack/python-neutronclient

With package available at:

https://pypi.python.org/pypi/python-neutronclient

Please report issues through launchpad:

https://bugs.launchpad.net/python-neutronclient

For more details, please see below.

5.0.0
^


Deprecation Notes
*

* Keystone v3 support for CLI

  * Using 'tenant_id' and 'tenant_name' arguments in API bindings is
deprecated. Use 'project_id' and 'project_name' arguments instead.


Bug Fixes
*

* CLI support to set QoS policy as not shared if it was shared
  before. The "qos-policy-update" command include a "--no-shared"
  option. Closes bug 1590942 (https://bugs.launchpad.net/python-
  neutronclient/+bug/1590942).

Changes in python-neutronclient 4.2.0..5.0.0


ec20f7f Fix string interpolation at logging call
3b1c538 Updated from global requirements
6bc4685 Add functional test hook for fwaas command
6ba4f31 HAProxy uses milliseconds for its timeout values.
d63a92a Base OSC plugin support
1d7c992 Updated from global requirements
0cbd30b Make USER_AGENT variable global
3832d53 Trivial: missing comma in the docs
1828552 Fixed --insecure not taking effect when specified
e917f21 Fix the problem of mox in test_shell.py
f3bea7e Updated from global requirements
8585c14 Trivial Fix: Fix typo
5f079fe improve readme contents
521ff7c Add no-shared option to qos-policy-update command
6d5356a Updated from global requirements
81a3d1f Add in missing translations
bbb7a88 Trivial: ignore openstack/common in flake8 exclude list
343e4b1 Update for API bindings
925d44a Remove unnecessary executable permissions
53a59e5 Updated from global requirements
954375b Update tempest_lib to tempest.lib
78d778c Constraint tox targets with upper-constraints.txt
ea0dfb1 Make purge supports dvr router's interface
35ce1a5 Switched from fixtures.MonkeyPatch to mock.patch
9e4f826 tests: removed mocking for Client.get_attr_metadata
51f07b8 Update the home-page with developer documentation
a065d20 Address pairs help missing space
2e048fd Devref: Add dynamic routing to OSC transition
04cf26d Updated from global requirements
98fc6c5 Updated from global requirements
b16bc6c Support sha256 for vpn-ikepolicy and vpn-ipsecpolicy
37ec942 Fixes unclear error when no --pool-prefix given
feba9bb Updated from global requirements
0927632 Added missing help text for 'purge' command
84aebc2 Fix random failure of security group unit test
d453846 Remove the last remaining vendor code
270da35 Update help information for lbaasv2 CLIs
3faf02f Devref: Newton updates for transition to OSC
6c82731 Devref Update: Transition to OpenStack Client
84cd3c4 Fix duplicate entries in list_columns while extending the list
9287040 Remove unnecessary entry from old relnotes


Diffstat (except docs and test files)
-

README.rst |  29 ++-
neutronclient/client.py|  36 +--
neutronclient/common/clientmanager.py  |  18 +-
neutronclient/neutron/client.py|   2 +-
neutronclient/neutron/v2_0/address_scope.py|   0
neutronclient/neutron/v2_0/agentscheduler.py   |   3 +-
.../neutron/v2_0/auto_allocated_topology.py|   0
neutronclient/neutron/v2_0/bgp/speaker.py  |   0
neutronclient/neutron/v2_0/lb/healthmonitor.py |   7 +-
neutronclient/neutron/v2_0/lb/v2/healthmonitor.py  | 102 
neutronclient/neutron/v2_0/lb/v2/listener.py   |  79 +++---
neutronclient/neutron/v2_0/lb/v2/loadbalancer.py   |  45 +++-
neutronclient/neutron/v2_0/lb/v2/member.py |  51 ++--
neutronclient/neutron/v2_0/lb/v2/pool.py   |  89 ---
neutronclient/neutron/v2_0/nsx/__init__.py |   0
neutronclient/neutron/v2_0/nsx/networkgateway.py   | 265 -
neutronclient/neutron/v2_0/nsx/qos_queue.py|  82 ---
neutronclient/neutron/v2_0/port.py |   2 +-
neutronclient/neutron/v2_0/purge.py|   5 +-
neutronclient/neutron/v2_0/qos/__init__.py |   0
neutronclient/neutron/v2_0/qos/policy.py   |  13 +-
neutronclient/neutron/v2_0/subnet.py   |   6 +-
neutronclient/neutron/v2_0/subnetpool.py   |   9 +-
neutronclient/neutron/v2_0/vpn/ikepolicy.py|   2 +-
neutronclient/neutron/v2_0/vpn/ipsecpolicy.py  |   2 +-
neutronclient/osc/__init__.py  |   0
neutronclient/osc/plugin.py|  61 +
neutronclient/osc/v2/__init__.py   |   0
neutronclient/osc/v2/dynamic_routing/__init__.py   |   0
neutronclient/osc/v2/fwaas/__init__.py |   0
neutronclient/osc/v2/lbaas/__init__.py |   0
neutronclient/o

Re: [openstack-dev] [Neutron] Project mascot - propose your choice/cast your vote

2016-07-27 Thread Armando M.
On 25 July 2016 at 10:52, Armando M.  wrote:

> On 14 July 2016 at 10:00, Armando M.  wrote:
>
>> Hi Neutrinos,
>>
>> Based on proposal [1], I prepared an etherpad to allow us to choose
>> collaboratively a set of candidates for our mascot. Propose/vote away on
>> [2]. You have time until Friday, July 22nd.
>>
>
> The deadline has passed, we have now a list of selected candidates to
> choose from.
>
> Please cast your vote [1]!
>
> Cheers,
> Armando
>
> [1]
> https://docs.google.com/forms/d/e/1FAIpQLSevnzF9z4a9jiXy8w8MRvvmXVmexK5QCxphOoFaOhBuaj9INw/viewform?c=0&w=1&usp=mail_form_link
>


Today is the deadline to submit mascot candidates for priority
consideration to Heidi Joy @Foundation. If you have not voted so far and
would like to, please do. I will close the poll [2] by EOB (PST
timezone) and submit the outcome of the poll later in the day.

Cheers,
Armando

[1] http://www.openstack.org/project-mascots

[2]
https://docs.google.com/forms/d/e/1FAIpQLSevnzF9z4a9jiXy8w8MRvvmXVmexK5QCxphOoFaOhBuaj9INw/viewform?c=0&w=1&usp=mail_form_link


>
>
>>
>> After the deadline the most voted ones (depending on the number) will be
>> sent to Heidi Joy @Foundation for the next step in the selection process.
>>
>> Feel free to reach out if you have any questions/suggestions.
>>
>> Happy hacking!
>> Armando
>>
>> [1] http://www.openstack.org/project-mascots
>> [2] https://etherpad.openstack.org/p/neutron-project-mascot
>>
>
> Today was the deadline for
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SFC stable/mitaka version

2016-07-27 Thread Tony Breeds
On Wed, Jul 06, 2016 at 12:40:48PM +, Gary Kotton wrote:
> Hi,
> Is anyone looking at creating a stable/mitaka version? What if someone want
> to use this for stable/mitaka?

If that's a thing you need, it's a matter of Armando asking the release managers
to create it.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Steven Dake (stdake)
One correction inside:

On 7/27/16, 12:02 PM, "Jay Pipes"  wrote:

>On 07/27/2016 01:59 PM, Fox, Kevin M wrote:
>> Kolla is providing a public api for docker containers and kubernetes
>>templates though. So its not just a deployment tool issue. Its not
>>specifically rest, but does that matter?
>
>Yes, it matters.
>
>Kolla isn't providing a user-interfacing HTTP API for doing something in
>a cloud. Kolla is providing a prescriptive way of building Docker images
>from a set of Dockerfiles and various configuration file templates. That
>isn't a consumable API. That's a reference manual.
>
>Best,
>-jay

Not that I think this discussion is all that productive, but it should be
based on facts.  Kolla container images do provide a standardized,
consumable ABI, and we have claimed as much for over two cycles.

Regards
-steve

>
>> 
>> From: Jay Pipes [jaypi...@gmail.com]
>> Sent: Wednesday, July 27, 2016 10:36 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is
>>getting Fuel CCP (docker/k8s) kicked off
>>
>> On 07/27/2016 10:10 AM, Chris Friesen wrote:
>>> On 07/27/2016 09:59 AM, Ed Leafe wrote:
 On Jul 27, 2016, at 10:51 AM, Joshua Harlow 
 wrote:

>> Whether to have competing projects in the big tent was debated by
>> the TC
>> at the time and my recollection is that we decided that was a good
>> thing
>> -- if someone wanted to develop a Nova replacement, then let them
>>do it
>> in public with the community. It would either win or lose based on
>>its
>> merits. Why is this not something which can happen here as well?
>
> For real, I (or someone) can start a nova replacement without getting
> rejected (or yelled at or ...) by the TC saying it's a competing
> project??? Wow, this is news to me...

 No, you can't start a Nova replacement and still call yourself
OpenStack.

 The sense I have gotten over the years from the TC is that gratuitous
 competition is strongly discouraged.
>>>
>>> I seem to recall that back during the "big tent" discussion people were
>>> talking about allowing competing projects that performed the same task,
>>> and letting natural selection decide which one survived.
>>>
>>> For example, at
>>> 
>>>"http://www.joinfu.com/2014/09/answering-the-existential-question-in-ope
>>>nstack/"
>>> Jay Pipes said that being under the big tent should not mean that the
>>> project is the only/best way to provide a specific function to
>>>OpenStack
>>> users.
>>>
>>> On the other hand, the OpenStack new projects requirements *do*
>>> explicitly state that "Where it makes sense, the project cooperates
>>>with
>>> existing projects rather than gratuitously competing or reinventing the
>>> wheel."
>>>
>>> Maybe it boils down to the definition of "gratuitous" competition.
>>
>> For the record I think I've always been clear that I don't see
>> competition as a bad thing within the OpenStack ecosystem however I have
>> always been a proponent of having a *single consistent REST API* for a
>> particular service type. I think innovation should happen at the
>> implementation layer, but the public HTTP APIs should be collated and
>> reviewed for overlap and inconsistencies.
>>
>> This was why in the past I haven't raised a stink about multiple
>> deployment tools, since there was no OpenStack HTTP API for deployment
>> of OpenStack itself. But I have absolutely raised concerns over overlap
>> of HTTP APIs, like is the case with Monasca and various Telemetry
>> project APIs. Again, implementation diversity cool. Public HTTP API
>> diversity, not cool.
>>
>> Best,
>> -jay
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-27 Thread Fox, Kevin M
It's not an "end user"-facing thing, but it is an "operator"-facing thing.

I deploy Kolla containers today on non-Kolla-managed systems in production, and
rely on that API being consistent.

I'm positive I'm not the only operator doing this either. This sounds like a
consumable API to me.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Wednesday, July 27, 2016 12:02 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting 
Fuel CCP (docker/k8s) kicked off

On 07/27/2016 01:59 PM, Fox, Kevin M wrote:
> Kolla is providing a public api for docker containers and kubernetes 
> templates though. So its not just a deployment tool issue. Its not 
> specifically rest, but does that matter?

Yes, it matters.

Kolla isn't providing a user-interfacing HTTP API for doing something in
a cloud. Kolla is providing a prescriptive way of building Docker images
from a set of Dockerfiles and various configuration file templates. That
isn't a consumable API. That's a reference manual.

Best,
-jay

> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: Wednesday, July 27, 2016 10:36 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is 
> getting Fuel CCP (docker/k8s) kicked off
>
> On 07/27/2016 10:10 AM, Chris Friesen wrote:
>> On 07/27/2016 09:59 AM, Ed Leafe wrote:
>>> On Jul 27, 2016, at 10:51 AM, Joshua Harlow 
>>> wrote:
>>>
> Whether to have competing projects in the big tent was debated by
> the TC
> at the time and my recollection is that we decided that was a good
> thing
> -- if someone wanted to develop a Nova replacement, then let them do it
> in public with the community. It would either win or lose based on its
> merits. Why is this not something which can happen here as well?

 For real, I (or someone) can start a nova replacement without getting
 rejected (or yelled at or ...) by the TC saying it's a competing
 project??? Wow, this is news to me...
>>>
>>> No, you can’t start a Nova replacement and still call yourself OpenStack.
>>>
>>> The sense I have gotten over the years from the TC is that gratuitous
>>> competition is strongly discouraged.
>>
>> I seem to recall that back during the "big tent" discussion people were
>> talking about allowing competing projects that performed the same task,
>> and letting natural selection decide which one survived.
>>
>> For example, at
>> "http://www.joinfu.com/2014/09/answering-the-existential-question-in-openstack/";
>> Jay Pipes said that being under the big tent should not mean that the
>> project is the only/best way to provide a specific function to OpenStack
>> users.
>>
>> On the other hand, the OpenStack new projects requirements *do*
>> explicitly state that "Where it makes sense, the project cooperates with
>> existing projects rather than gratuitously competing or reinventing the
>> wheel."
>>
>> Maybe it boils down to the definition of "gratuitous" competition.
>
> For the record I think I've always been clear that I don't see
> competition as a bad thing within the OpenStack ecosystem however I have
> always been a proponent of having a *single consistent REST API* for a
> particular service type. I think innovation should happen at the
> implementation layer, but the public HTTP APIs should be collated and
> reviewed for overlap and inconsistencies.
>
> This was why in the past I haven't raised a stink about multiple
> deployment tools, since there was no OpenStack HTTP API for deployment
> of OpenStack itself. But I have absolutely raised concerns over overlap
> of HTTP APIs, like is the case with Monasca and various Telemetry
> project APIs. Again, implementation diversity cool. Public HTTP API
> diversity, not cool.
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron][networking-ovn][networking-odl] Syncing neutron DB and OVN DB

2016-07-27 Thread Zhou, Han
On Wed, Jul 27, 2016 at 7:15 AM, Russell Bryant  wrote:

>
>
> On Wed, Jul 27, 2016 at 5:58 AM, Kevin Benton  wrote:
>
>> > I'd like to see if we can solve the problems more generally.
>>
>> We've tried before but we very quickly run into competing requirements
>> with regards to eventual consistency. For example, asynchronous background
>> sync doesn't work if someone wants their backend to confirm that port
>> details are acceptable (e.g. mac isn't in use by some other system outside
>> of openstack). Then each backend has different methods for detecting what
>> is out of sync (e.g. config numbers, hashes, or just full syncs on startup)
>> that each come with their own requirements for how much data needs to be
>> resent when an inconsistency is detected.
>>
>> If we can come to some common ground of what is required by all of them,
>> then I would love to get some of this built into the ML2 framework.
>> However, we've discussed this at meetups/mid-cycles/summits and it
>> inevitably ends up with two people drawing furiously on a whiteboard,
>> someone crying in the corner, and everyone else arguing about the lack of
>> parametric polymorphism in Go.
>>
>
> ​Ha, yes, makes sense that this is really hard to solve in a way that
> works for everyone ...
> ​
>
>
>> Even between OVN and ODL in this thread, it sounds like the only thing in
>> common is a background worker that consumes from a queue of tasks in the
>> db. Maybe realistically the only common thing we can come up with is a
>> taskflow queue stored in the DB to solve the multiple workers issue...
>>
>
> ​To clarify, ODL has this background worker and the discussion was whether
> OVN should try to follow a similar approach.
>
> So far, my gut feeling is that it's far too complicated for the problems
> it would solve.  There's one identified multiple-worker related race
> condition on updates, but I think we can solve that another way.​
>
>
Russell, in fact I think this background worker is a good way to solve
both problems:

Problem 1. When something fails while updating the OVN DB in post-commit: with
the help of the background worker, retries can be done and the job state can
be tracked, and with that information proper actions can be taken against
failed jobs, e.g. cleanups. It is basically a declarative style of
implementation, which IMHO is particularly good in the ML2 context,
because we cannot just roll back Neutron DB changes on failure, since the DB is
shared by all mech drivers. (Even in a monolithic plugin, handling
partial failures and doing rollbacks is a big headache.)

Problem 2. Race conditions due to the lack of a critical section between the
Neutron DB transaction and post-commit: with the help of the journal, the
ordering is ensured to be the same as the DB transaction commits. Protection
of the journal processing across multiple background workers can be
properly enforced with the help of a DB transaction.

I think ODL and OVN are not the only ones facing these problems. They are
pretty general to most drivers if not all. It would be great to have a
common task flow mechanism in ML2, but I'd like to try it in OVN first (if
no better solution to the problems above).
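To make Problem 2 above concrete, here is a generic, hedged sketch (not the ODL or
OVN implementation) of how several background workers could share one journal table
without double-processing a row; the table name, column names and DB URL are
invented for illustration:

```python
# Hedged sketch: the oldest pending journal row is claimed inside a single DB
# transaction, so the row lock guarantees only one worker picks it up.
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://neutron:secret@db/neutron")  # assumed URL

def claim_next_journal_row(conn):
    """Atomically move the oldest pending journal row to 'processing'."""
    with conn.begin():
        row = conn.execute(text(
            "SELECT id, operation, payload FROM ovn_journal "
            "WHERE state = 'pending' ORDER BY id LIMIT 1 FOR UPDATE"
        )).first()
        if row is None:
            return None
        conn.execute(
            text("UPDATE ovn_journal SET state = 'processing' WHERE id = :id"),
            {"id": row.id},
        )
        return row

with engine.connect() as conn:
    entry = claim_next_journal_row(conn)
    if entry is not None:
        # apply entry.operation / entry.payload to the OVN NB DB here, then
        # mark the row 'done' (or back to 'pending' for a retry) in a new
        # transaction, preserving the original commit ordering.
        pass
```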


>
>
>> On Tue, Jul 26, 2016 at 11:31 AM, Russell Bryant 
>> wrote:
>>
>>>
>>>
>>> On Fri, Jul 22, 2016 at 7:51 AM, Numan Siddique 
>>> wrote:
>>>
 Thanks for the comments Amitabha.
 Please see comments inline

 On Fri, Jul 22, 2016 at 5:50 AM, Amitabha Biswas 
 wrote:

> Hi Numan,
>
> Thanks for the proposal. We have also been thinking about this
> use-case.
>
> If I’m reading this accurately (and I may not be), it seems that the
> proposal is to not have any OVN NB (CUD) operations (R operations outside
> the scope) done by the api_worker threads but rather by a new journal
> thread.
>
>
 Correct.
 ​


> If this is indeed the case, I’d like to consider the scenario when
> there any N neutron nodes, each node with M worker threads. The journal
> thread at the each node contain list of pending operations. Could there be
> (sequence) dependency in the pending operations amongst each the journal
> threads in the nodes that prevents them from getting applied (for e.g.
> Logical_Router_Port and Logical_Switch_Port inter-dependency), because we
> are returning success on neutron operations that have still not been
> committed to the NB DB.
>
>
 It's a valid scenario and should be designed properly to handle such
 scenarios in case we take this approach.

>>>
>>> ​I believe a new table in the Neutron DB is used to synchronize all of
>>> the journal threads.
>>> ​
>>> Also note that OVN currently has no custom tables in the Neutron
>>> database and it would be *very* good to keep it that way if we can.
>>>
>>>

 ​

> Couple of clarifications and thoughts below.
>
> Thanks
> Amitabha 
>
> On Jul 13, 2016, at 1:20 A

Re: [openstack-dev] [Neutron] SFC stable/mitaka version

2016-07-27 Thread Ihar Hrachyshka

Tony Breeds  wrote:


On Wed, Jul 06, 2016 at 12:40:48PM +, Gary Kotton wrote:

Hi,
Is anyone looking at creating a stable/mitaka version? What if someone  
want

to use this for stable/mitaka?


If that's a thing you need it's a matter of Armando asking the release  
managers

to create it.


I only suggest Armando is not dragged into it; the release liaison
(currently me) should be able to handle the request if it comes from the
subproject's core team.


Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Support for bay rollback may break magnum API backward compatibility

2016-07-27 Thread Hongbin Lu
Here is the guideline to evaluate an API change: 
http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html
 . In particular, I highlight the followings:

"""
The following types of changes are acceptable when conditionally added as a new 
API extension:
* Adding an optional property to a resource representation which may be 
supplied by clients, assuming the API previously would ignore this property.
* ...
The following types of changes are generally not considered acceptable:
* A change such that a request which was successful before now results in an 
error response
* Changing the semantics of a property in a resource representation which may 
be supplied by clients.
* ...
"""

Above all, as Ton mentioned, just adding a new option (--rollback) looks OK. 
However, the implementation should not break the existing behaviors. In 
particular, the proposed patch 
(https://review.openstack.org/#/c/343478/4/magnum/api/controllers/v1/bay.py) 
changes the request parameters and their types, which is considered to be 
unacceptable (unless bumping the microversion). To deal with that, I think 
there are two options:
1. Modify the proposed patch to make it backward-compatible. In particular, it 
should keep the existing properties as is (don't change their types and 
semantics). The new option should be optional and it should be ignored if 
clients are sending the old requests (see the sketch below).
2. Keep the proposed patch as is, but bumping the microversion. You need to 
wait for this patch [1] to merge, and reference the microversion guide [1] to 
bump the version. In addition, it is highly recommended to follow the standard 
deprecation policy [2]. That means i) print a deprecated warning if old APIs 
are used, ii) document how to migrate from old APIs to new APIs, and iii) 
remove the old APIs after the deprecation period.

[1] 
https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
[2] 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
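To make option 1 concrete, a rough, hypothetical sketch (not the actual magnum
controller code, and with invented helper names) of keeping the old request shape
while treating the new flag as strictly optional:

```python
# Hedged sketch of option 1: the new flag defaults to the old behaviour, so
# requests from old clients keep working unchanged. snapshot()/apply_patch()/
# restore() are hypothetical helpers, not real magnum APIs.
def update_bay(bay, patch, rollback=False):
    """Apply a bay update; roll back to the previous state only on request."""
    saved_state = bay.snapshot()        # hypothetical helper
    try:
        bay.apply_patch(patch)          # existing update path, unchanged
    except Exception:
        if rollback:                    # new, strictly optional behaviour
            bay.restore(saved_state)    # hypothetical helper
        raise
```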

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: July-27-16 9:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Support for bay rollback may break magnum 
API backward compatibility


Hi Wenzhi,
Looks like you are adding the new --rollback option to bay-update. If the user 
does not specify this new option,
then bay-update behaves the same as before; in other words, if it fails, then 
the state of the bay will be left
in the partially updated mode. Is this correct? If so, this does change the 
API, but does not seem to break
backward compatibility.
Ton Ngo,


From: "Wenzhi Yu (yuywz)" mailto:wenzhi...@163.com>>
To: "openstack-dev" 
mailto:openstack-dev@lists.openstack.org>>
Date: 07/27/2016 04:13 AM
Subject: [openstack-dev] [magnum] Support for bay rollback may break magnum API 
backward compatibility





Hi folks,

I am working on a patch [1] to add a bay rollback mechanism on update failure.
But it seems to break magnum API
backward compatibility.

I'm not sure how to deal with this, can you please give me your suggestion? 
Thanks!

[1]https://review.openstack.org/#/c/343478/

2016-07-27


Best Regards,
Wenzhi Yu 
(yuywz)__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Pending removal of X-IO volume driver

2016-07-27 Thread Sean McGinnis
The Cinder policy for driver CI requires that all volume drivers
have a CI reporting on any new patchset. CIs may have some downtime,
but if they do not report within a two-week period they are
considered out of compliance with our policy.

This is a notification that the X-IO OpenStack CI is out of compliance.
It has not reported since March 18th, 2016.

The patch for driver removal has been posted here:

https://review.openstack.org/348022

If this CI is not brought into compliance, the patch to remove the
driver will be approved one week from now.

Thanks,
Sean McGinnis (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SFC stable/mitaka version

2016-07-27 Thread Tony Breeds
On Wed, Jul 27, 2016 at 10:23:30PM +0200, Ihar Hrachyshka wrote:

> I only suggest Armando is not dragged into it, the release liaison
> (currently me) should be able to handle the request if it comes from the
> core team for the subproject.

Good point.  I defaulted to PTL but you're right, the release liaison is also
totally reasonable.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Support for bay rollback may break magnum API backward compatibility

2016-07-27 Thread Adrian Otto

On Jul 27, 2016, at 1:26 PM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:

Here is the guideline to evaluate an API change: 
http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html
 . In particular, I highlight the followings:

"""
The following types of changes are acceptable when conditionally added as a new 
API extension:
* Adding an optional property to a resource representation which may be 
supplied by clients, assuming the API previously would ignore this property.
* …
The following types of changes are generally not considered acceptable:
* A change such that a request which was successful before now results in an 
error response
* Changing the semantics of a property in a resource representation which may 
be supplied by clients.
* …
"""

Above all, as Ton mentioned, just adding a new option (--rollback) looks OK. 
However, the implementation should not break the existing behaviors. In 
particular, the proposed patch 
(https://review.openstack.org/#/c/343478/4/magnum/api/controllers/v1/bay.py) 
changes the request parameters and their types, which is considered to be 
unacceptable (unless bumping the microversion). To deal with that, I think 
there are two options:
1. Modify the proposed patch to make it backward-compatible. In particular, it 
should keep the existing properties as is (don’t change their types and 
semantics). The new option should be optional and it should be ignored if 
clients are sending the old requests.

Use the #1 approach above, please.

2. Keep the proposed patch as is, but bumping the microversion. You need to 
wait for this patch [1] to merge, and reference the microversion guide [1] to 
bump the version. In addition, it is highly recommended to follow the standard 
deprecation policy [2]. That means i) print a deprecated warning if old APIs 
are used, ii) document how to migrate from old APIs to new APIs, and iii) 
remove the old APIs after the deprecation period.

You can do this as well, but please don’t consider this an OR choice.

Adrian


[1] 
https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
[2] 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: July-27-16 9:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Support for bay rollback may break magnum 
API backward compatibility


Hi Wenzhi,
Looks like you are adding the new --rollback option to bay-update. If the user 
does not specify this new option,
then bay-update behaves the same as before; in other words, if it fails, then 
the state of the bay will be left
in the partially updated mode. Is this correct? If so, this does change the 
API, but does not seem to break
backward compatibility.
Ton Ngo,

"Wenzhi Yu (yuywz)" ---07/27/2016 04:13:07 AM---Hi folks, I am 
working on a patch [1] to add bay rollback machanism on update failure. But it 
seems

From: "Wenzhi Yu (yuywz)" mailto:wenzhi...@163.com>>
To: "openstack-dev" 
mailto:openstack-dev@lists.openstack.org>>
Date: 07/27/2016 04:13 AM
Subject: [openstack-dev] [magnum] Support for bay rollback may break magnum API 
backward compatibility





Hi folks,

I am working on a patch [1] to add a bay rollback mechanism on update failure. 
But it seems to break magnum API backward compatibility.

I'm not sure how to deal with this; can you please give me your suggestions? 
Thanks!

[1]https://review.openstack.org/#/c/343478/

2016-07-27


Best Regards,
Wenzhi Yu (yuywz)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Pending removal of Tintri volume driver

2016-07-27 Thread Sean McGinnis
The Cinder policy for driver CI requires that all volume drivers
have a CI reporting on every new patchset. CIs may have some downtime,
but if they do not report within a two-week period they are
considered out of compliance with our policy.

This is a notification that the Tintri OpenStack CI is out of compliance.
It has not reported since June 9th, 2016.

The patch for driver removal has been posted here:

https://review.openstack.org/348026/

If this CI is not brought into compliance, the patch to remove the
driver will be approved one week from now.

Thanks,
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

