Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-12 Thread Yolanda Robla Mota
Hi Saravanan
Thanks for your comments. With this new module, I guess a reboot is still
needed after os-host-config?
Right now we have been guided by TripleO and Ironic people to start using
what in Ironic is called "custom deployment steps". An initial spec is
reflected here:
https://review.openstack.org/#/c/382091

The idea would be to define custom deployment steps for ironic, such as
setting the kernel boot parameters. Could that be a solution for your
"tuned" needs as well?

Best
Yolanda

On Tue, Dec 13, 2016 at 7:59 AM, Saravanan KR  wrote:

> Hello,
>
> Thanks Yolanda for starting the thread. The list of requirements in
> the host configuration, related to boot parameters and reboot are:
>
> * DPDK - For vfio-pci driver binding, iommu support on kernel args is
> mandatory, which has to be configured before os-net-config runs
> * DPDK & RealTime - Enabling "tuned" profile for nfv or rt, will
> update the boot parameters and a reboot is required
> * Other items mentioned by Yolanda
>
> If it were only about configuring the boot parameters, then ironic's deploy
> feature might help, but there is another requirement: enabling the "tuned"
> profile, which tunes the host for the required configuration and also
> requires a reboot, as it alters the boot parameters. If we can collate all
> the configurations that require a reboot, we will improve the reboot time.
> And if we reboot before the actual OpenStack services are started, the
> reboot time _may_ improve.
>
> May I propose a *new* module for TripleO deployments, say
> *os-host-config*, which would run after os-collect-config and before
> os-net-config? We could then collate all the host-specific configuration
> inside this module. This module could be a set of ansible scripts that
> only configure the host. Of course, the parameters to this module
> should be provided via os-collect-config. Separating the host
> configuration will also help in the containerized TripleO deployment.
>
> Any other or better alternatives are welcome.
>
> Please share your views, for or against it.
>
> Regards,
> Saravanan KR
>
>
> On Fri, Dec 2, 2016 at 9:31 PM, Yolanda Robla Mota 
> wrote:
> > Hi, Dmitry
> > That's what I didn't get very clear: whether all the deployment steps are
> pre-imaging, as that statement says, or whether every deploy step could be
> isolated and configured somehow.
> > I'm also a bit confused by that spec, because it mixes the concept of
> "deployment steps" with all the changes needed for runtime RAID. Could it
> be possible to separate it into two separate specs?
> >
> > - Original Message -
> > From: "Dmitry Tantsur" 
> > To: openstack-dev@lists.openstack.org
> > Sent: Friday, December 2, 2016 3:51:30 PM
> > Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel
> parameters on local boot
> >
> > On 12/02/2016 01:28 PM, Yolanda Robla Mota wrote:
> >> Hi Dmitry
> >>
> >> So we've been looking at that spec you suggested, but we are wondering
> if that will be useful for our use case. As the text says:
> >>
> >> The ``ironic-python-agent`` project and ``agent`` driver will be
> adjusted to
> >> support ``get_deploy_steps``. That way, ``ironic-python-agent`` will be
> able
> >> to declare deploy steps to run prior to disk imaging, and operators
> will be
> >> able to extend ``ironic-python-agent`` to add any custom step.
> >>
> >> Our needs are different: actually we need to create a deployment step
> after imaging. We'd need a step that drops config in /etc/default/grub
> and updates it. This is a post-imaging deploy step that modifies the base
> image. Could ironic support this kind of step, if there is a base system
> to just define per-user steps?
> >
> > I thought that all deployment operations are converted to steps, with
> > partitioning, writing the image, writing the configdrive and installing
> the boot
> > loader being four default ones (as you see, two steps actually happen
> after the
> > image is written).
> >
> >>
> >> The idea we had in mind is:
> >> - from tripleo, add a property to each flavor that defines the boot
> parameters:  openstack flavor set compute --property
> os:kernel_boot_params='abc'
> >> - define an "ironic post-imaging deploy step" that will grab this
> property from the flavor, drop it in /etc/default/grub and regenerate it
> >> - then on local boot, the proper kernel parameters will be applied
> >>
> >> What is your feedback there?
> >>
> >> - Original Message -
> >> From: "Dmitry Tantsur" 
> >> To: openstack-dev@lists.openstack.org
> >> Sent: Friday, December 2, 2016 12:44:29 PM
> >> Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel
> parameters on local boot
> >>
> >> On 11/28/2016 04:46 PM, Jay Faulkner wrote:
> >>>
>  On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota 
> wrote:
> 
>  Hi, good afternoon
> 
>  I wanted 

[openstack-dev] [all][infra] Python3 change in OpenStack CI

2016-12-12 Thread Andreas Jaeger
As part of the move to run master and stable/newton on Xenial, we
disabled testing python 3.4 on these newer branches.

Now, for testing we have:
* python3.4 on older branches (stable/liberty and stable/mitaka) - on a
trusty VM
* python3.5 on newer branches (stable/newton, master and future stable
branches)

In the past, Clark added non-voting python 3.5 jobs to all repos that had
python 3.4 jobs and asked projects to make them voting. Some projects have
still not done this.

I encourage every project to:
1) Make python 3.5 testing voting, if not done already.
2) Add python 3.5 testing if there is none currently.
3) Remove python 3.4 jobs if the repo has no older branches (only master).

Please review your existing jobs and send changes to the project-config repo.
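Alongside the project-config change, the project's own tox.ini usually needs
a py35 environment; a minimal sketch (the envlist entries are illustrative
and depend on the repo):

```
[tox]
# drop py34 once no supported branch needs it; py35 becomes the voting target
envlist = py35,py27,pep8
```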

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] pending review

2016-12-12 Thread Huang Zhiteng
Hi Sam,

Thank you for reporting the bug and trying to fix it. Sorry that your
change sat there for such a long time. I've left some comments on your patch.

On Tue, Dec 13, 2016 at 12:33 PM, Sam Morrison  wrote:

> Hi Cinder devs,
>
> I’ve had a review [1] waiting for some eyes for over a month now. What’s
> the process here, usually I get a response to a review in other projects in
> a day or two.
> Is there someone I need to alert or add to the review specifically for
> cinder patches?
>
> Thanks,
> Sam
>
> [1] https://review.openstack.org/#/c/393092/
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards
Huang Zhiteng


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-12 Thread Saravanan KR
Hello,

Thanks Yolanda for starting the thread. The list of requirements in
the host configuration, related to boot parameters and reboot are:

* DPDK - For vfio-pci driver binding, iommu support on kernel args is
mandatory, which has to be configured before os-net-config runs
* DPDK & RealTime - Enabling "tuned" profile for nfv or rt, will
update the boot parameters and a reboot is required
* Other items mentioned by Yolanda

If it were only about configuring the boot parameters, then ironic's deploy
feature might help, but there is another requirement: enabling the "tuned"
profile, which tunes the host for the required configuration and also
requires a reboot, as it alters the boot parameters. If we can collate all
the configurations that require a reboot, we will improve the reboot time.
And if we reboot before the actual OpenStack services are started, the
reboot time _may_ improve.

May I propose a *new* module for TripleO deployments, say
*os-host-config*, which would run after os-collect-config and before
os-net-config? We could then collate all the host-specific configuration
inside this module. This module could be a set of ansible scripts that
only configure the host. Of course, the parameters to this module
should be provided via os-collect-config. Separating the host
configuration will also help in the containerized TripleO deployment.
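As a sketch of what one such host-config step might do, here is a hypothetical
helper (not TripleO code) that merges extra kernel arguments, e.g. the iommu
flags needed for DPDK, into the GRUB_CMDLINE_LINUX line of /etc/default/grub;
a real step would also run grub2-mkconfig and schedule the reboot:

```python
import re


def add_kernel_args(grub_defaults: str, extra_args: str) -> str:
    """Return /etc/default/grub text with extra_args appended to
    GRUB_CMDLINE_LINUX (hypothetical os-host-config style step)."""

    def _append(match):
        merged = (match.group(2) + " " + extra_args).strip()
        return match.group(1) + merged + match.group(3)

    pattern = re.compile(r'(GRUB_CMDLINE_LINUX=")([^"]*)(")')
    if pattern.search(grub_defaults):
        # Append to the existing cmdline, keeping current arguments
        return pattern.sub(_append, grub_defaults, count=1)
    # No existing line: add one at the end
    return grub_defaults + '\nGRUB_CMDLINE_LINUX="%s"\n' % extra_args


conf = 'GRUB_TIMEOUT=5\nGRUB_CMDLINE_LINUX="quiet"\n'
print(add_kernel_args(conf, "intel_iommu=on iommu=pt"))
```

This is only a sketch of the idempotent-ish edit such a step would make before
os-net-config runs; the actual mechanism (ansible module, ironic deploy step)
is what this thread is deciding.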

Any other or better alternatives are welcome.

Please share your views, for or against it.

Regards,
Saravanan KR


On Fri, Dec 2, 2016 at 9:31 PM, Yolanda Robla Mota  wrote:
> Hi, Dmitry
> That's what I didn't get very clear: whether all the deployment steps are 
> pre-imaging, as that statement says, or whether every deploy step could be 
> isolated and configured somehow.
> I'm also a bit confused by that spec, because it mixes the concept of 
> "deployment steps" with all the changes needed for runtime RAID. Could it be 
> possible to separate it into two separate specs?
>
> - Original Message -
> From: "Dmitry Tantsur" 
> To: openstack-dev@lists.openstack.org
> Sent: Friday, December 2, 2016 3:51:30 PM
> Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel 
> parameters on local boot
>
> On 12/02/2016 01:28 PM, Yolanda Robla Mota wrote:
>> Hi Dmitry
>>
>> So we've been looking at that spec you suggested, but we are wondering if 
>> that will be useful for our use case. As the text says:
>>
>> The ``ironic-python-agent`` project and ``agent`` driver will be adjusted to
>> support ``get_deploy_steps``. That way, ``ironic-python-agent`` will be able
>> to declare deploy steps to run prior to disk imaging, and operators will be
>> able to extend ``ironic-python-agent`` to add any custom step.
>>
>> Our needs are different: actually we need to create a deployment step after 
>> imaging. We'd need a step that drops config in /etc/default/grub and 
>> updates it. This is a post-imaging deploy step that modifies the base 
>> image. Could ironic support this kind of step, if there is a base system 
>> to just define per-user steps?
>
> I thought that all deployment operations are converted to steps, with
> partitioning, writing the image, writing the configdrive and installing the 
> boot
> loader being four default ones (as you see, two steps actually happen after 
> the
> image is written).
>
>>
>> The idea we had in mind is:
>> - from tripleo, add a property to each flavor that defines the boot 
>> parameters:  openstack flavor set compute --property 
>> os:kernel_boot_params='abc'
>> - define an "ironic post-imaging deploy step" that will grab this property 
>> from the flavor, drop it in /etc/default/grub and regenerate it
>> - then on local boot, the proper kernel parameters will be applied
>>
>> What is your feedback there?
>>
>> - Original Message -
>> From: "Dmitry Tantsur" 
>> To: openstack-dev@lists.openstack.org
>> Sent: Friday, December 2, 2016 12:44:29 PM
>> Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel 
>> parameters on local boot
>>
>> On 11/28/2016 04:46 PM, Jay Faulkner wrote:
>>>
 On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota  
 wrote:

 Hi, good afternoon

 I wanted to start an email thread about how to properly setup kernel 
 parameters on local boot, for our overcloud images on TripleO.
 These parameters may vary depending on the needs of our end users, and 
 even can be different ( for different roles ) per deployment. As an 
 example, we need it for:
 - enable FIPS kernel in terms of security 
 (https://bugs.launchpad.net/tripleo/+bug/1640235)
 - enable functionality for DPDK/SR-IOV 
 (https://review.openstack.org/#/c/331564/)
 - enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
 - etc..

 So far, the solutions we got were on several directions:

 1. Update the golden overcloud-full image with 

Re: [openstack-dev] [vitrage] how to use mock driver

2016-12-12 Thread Yujun Zhang
Elisha, thanks for the explanation. The difference is clear to me now.

If I understand it correctly, the regular expression in spec JSON is for
information only. It is never compiled into a `re` object.

The actual values are generated in `static_info_parsers` from the
`mapping`. The regular expression is neither used as a value template nor
for value validation.

Is that right?

On Mon, Dec 12, 2016 at 8:47 PM Rosensweig, Elisha (Nokia - IL) <
elisha.rosensw...@nokia.com> wrote:

> Hi,
>
>
>
> · In Vitrage Datasources, we can have a different input format
> for snapshots and updates. Thus, we need a different JSON file for each.
>
> · Also, as part of the Mock feature, we need to support (for each
> resource) things that will be static, such as it’s name, and things that
> change over time, such as timestamps. We support this partially via
> different JSON files. In general, the dynamic file (marked with “D”)
> *overwrites* the static one (marked with “S”).
>
> · In the code you can further inject specific fields you want to
> have for a specific test, in addition to the JSON files. See examples in
> test_scenario_evaluator.py.
>
>
>
> Elisha
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Monday, December 12, 2016 8:23 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [vitrage] how to use mock driver
>
>
>
> Is there any documentation on how to use mock driver for unit testing?
>
>
>
> It seems it generates fake events from a json spec, but what is the difference
> between
>
>
>
> - `xxx_snapshot_X.json` and `xxx_dynamic_X.json`
>
> - `xxx_S` and `xxx_D`
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [kolla][tc] Video Meetings - input requested

2016-12-12 Thread Jeffrey Zhang
I do not want to kill the ad hoc video meeting, but we should keep a certain
degree of openness. What I would like to see is an invitation in email or the
IRC channel, and an agenda that tells others what will be discussed. Any
decisions made in the video meeting should be recorded in some way.

On the other hand, the Kolla project and its team are growing. We now have
the kolla, kolla-ansible and kolla-kubernetes projects. Sometimes one hour
can be very tight, especially for the kolla-kubernetes project, which is
always discussed last and may not have much time left. I want to make a
change to this in the next kolla meeting.

The one-hour meeting time will be split into four parts (10-10-20-20): 10
minutes for announcements, 10 minutes for kolla, 20 minutes for kolla-ansible
and 20 minutes for kolla-kubernetes.
So if you want to discuss something in the meeting, feel free to add it to
the agenda list before the meeting[0].

[0] https://wiki.openstack.org/wiki/Meetings/Kolla



On Tue, Dec 13, 2016 at 1:51 PM, Swapnil Kulkarni 
wrote:

>
>
> On Dec 13, 2016 8:44 AM, "Michał Jastrzębski"  wrote:
>
> I think the video meetings Jeffrey is referring to are just that: quick ad
> hoc ways to resolve some technical dispute or implementation. We did that a
> few times to quickly work out kolla k8s issues. Hangouts are a much more
> efficient way to hold discussions like that.
>
> My take on the issue is that we should use all tools available. Scheduling
> of such meetings would defeat their purpose of resolving things *quickly*.
> If something requires scheduling it should be done on our weekly meeting.
>
> Video meetings are a thing in Kolla k8s for one more reason: we want to move
> fast, and scheduling a meeting with proper heads-up for every technical
> dispute (a couple of these every week) would seriously impede our ability to
> deliver. Ocata is our goal!
>
> Tldr; ad hoc video meetings are good for quickly paced dev like in Kolla
> k8s imho
>
> Cheers
> Michał
>
> On Dec 12, 2016 12:36 PM, "Ed Leafe"  wrote:
>
> On Dec 12, 2016, at 11:16 AM, Jeffrey Zhang 
> wrote:
>
> > Some contributors in kolla have had unscheduled video meetings. This has
> > resulted in complaints about inclusiveness. Some contributors can’t even
> make
> > the meeting we have, and another scheduled video meeting might produce a
> > situation in which there is no record of decisions made during the
> video
> > meeting. At least with IRC meetings there is always a log.
>
> Occasionally a quick Google hangout is necessary in Nova in order to
> quickly settle an outstanding issue so we can continue to make progress.
> When that happens, the link is posted in the #openstack-nova channel, and
> anyone who is interested can join. So while it’s not logged like an IRC
> meeting, it’s not excluding anyone, and we can quickly remove roadblocks
> that are harder to do in IRC.
>
>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> I think there is a mutual understanding where we have informal/ ad-hoc/ on
> the fly meetings to discuss important things, like after design sessions in
> the corridor, during lunch/dinner, etc. Video calls or hangouts are just a
> digital extension of it.
>
> The only recommendation I have is a digest, either on etherpad or the mailing
> list, where people who missed the meeting can get the required details.
>
> Swapnil
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Re: [openstack-dev] [kolla][tc] Video Meetings - input requested

2016-12-12 Thread Swapnil Kulkarni
On Dec 13, 2016 8:44 AM, "Michał Jastrzębski"  wrote:

I think the video meetings Jeffrey is referring to are just that: quick ad
hoc ways to resolve some technical dispute or implementation. We did that a
few times to quickly work out kolla k8s issues. Hangouts are a much more
efficient way to hold discussions like that.

My take on the issue is that we should use all tools available. Scheduling
of such meetings would defeat their purpose of resolving things *quickly*.
If something requires scheduling it should be done on our weekly meeting.

Video meetings are a thing in Kolla k8s for one more reason: we want to move
fast, and scheduling a meeting with proper heads-up for every technical
dispute (a couple of these every week) would seriously impede our ability to
deliver. Ocata is our goal!

Tldr; ad hoc video meetings are good for quickly paced dev like in Kolla
k8s imho

Cheers
Michał

On Dec 12, 2016 12:36 PM, "Ed Leafe"  wrote:

On Dec 12, 2016, at 11:16 AM, Jeffrey Zhang  wrote:

> Some contributors in kolla have had unscheduled video meetings. This has
> resulted in complaints about inclusiveness. Some contributors can’t even
make
> the meeting we have, and another scheduled video meeting might produce a
> situation in which there is no record of decisions made during the
video
> meeting. At least with IRC meetings there is always a log.

Occasionally a quick Google hangout is necessary in Nova in order to
quickly settle an outstanding issue so we can continue to make progress.
When that happens, the link is posted in the #openstack-nova channel, and
anyone who is interested can join. So while it’s not logged like an IRC
meeting, it’s not excluding anyone, and we can quickly remove roadblocks
that are harder to do in IRC.


-- Ed Leafe








I think there is a mutual understanding where we have informal/ad-hoc/on-the-fly
meetings to discuss important things, like after design sessions in
the corridor, during lunch/dinner, etc. Video calls or hangouts are just a
digital extension of it.

The only recommendation I have is a digest, either on etherpad or the mailing
list, where people who missed the meeting can get the required details.

Swapnil


Re: [openstack-dev] Cannot ping or ssh to floating ip assigned to instance [neutron][floating ip]

2016-12-12 Thread Noam Angel
Have you opened SSH/ICMP in your security groups?
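If not, newton-era rules along these lines should open them (a sketch only;
the group name "default" and the exact CLI flags may differ in your setup):

```
neutron security-group-rule-create --direction ingress --protocol icmp default
neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 22 --port-range-max 22 default
```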

From: Shanker Gudipati [mailto:shanker.gudip...@tcs.com]
Sent: Tuesday, December 13, 2016 7:24 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Cannot ping or ssh to floating ip assigned to instance 
[neutron][floating ip]


Hi all,

I have a devstack setup of the Newton version (lab setup).

Issue: cannot ping or SSH to the floating IP assigned to an instance (security 
groups are allowed).

172.16.73.0/24 is the external network or lab network.

neutron net-list
+--+--+-+
| id   | name | subnets 
|
+--+--+-+
| ccdb22fe-8bae-4378-9b47-82c04a16186e | ext-net  | 
af84a87f-ce6f-4da3-a6bb-5238e97cabd4 172.16.73.0/24 |
| e5999086-9fb1-403b-9273-7bb218ceebe8 | demo-net | 
734e5660-807b-4038-9a86-096889f5d188 10.10.1.0/24   |
+--+--+-+

neutron net-show e5999086-9fb1-403b-9273-7bb218ceebe8
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| availability_zone_hints   |  |
| availability_zones| nova |
| created_at| 2016-12-12T13:16:49Z |
| description   |  |
| id| e5999086-9fb1-403b-9273-7bb218ceebe8 |
| ipv4_address_scope|  |
| ipv6_address_scope|  |
| mtu   | 1450 |
| name  | demo-net |
| port_security_enabled | True |
| project_id| 03959ecbd383459eaf5d5389ab4372ac |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 61   |
| revision_number   | 5|
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   | 734e5660-807b-4038-9a86-096889f5d188 |
| tags  |  |
| tenant_id | 03959ecbd383459eaf5d5389ab4372ac |
| updated_at| 2016-12-12T13:17:31Z |
+---+--+


neutron net-list
+--+--+-+
| id   | name | subnets 
|
+--+--+-+
| ccdb22fe-8bae-4378-9b47-82c04a16186e | ext-net  | 
af84a87f-ce6f-4da3-a6bb-5238e97cabd4 172.16.73.0/24 |
| e5999086-9fb1-403b-9273-7bb218ceebe8 | demo-net | 
734e5660-807b-4038-9a86-096889f5d188 10.10.1.0/24   |
+--+--+-+
ubuntu@ubuntu-HP-Compaq-Elite-8300-SFF:/etc/neutron/plugins/ml2$ neutron 
router-list
+--+-+-+-+---+
| id   | name| external_gateway_info
   | distributed | ha|
+--+-+-+-+---+
| ce901ef7-60cd-4d88-828a-3bb5a7e3c9d5 | demo-router | {"network_id": 
"ccdb22fe-   | False   | False |
|  | | 
8bae-4378-9b47-82c04a16186e", "enable_snat": true,  | |   |
|  | | "external_fixed_ips": 
[{"subnet_id": "af84a87f- | |   |
|  | | 
ce6f-4da3-a6bb-5238e97cabd4", "ip_address": | |   |
|  | | "172.16.73.247"}]}   
   | |   |

[openstack-dev] Cannot ping or ssh to floating ip assigned to instance [neutron][floating ip]

2016-12-12 Thread Shanker Gudipati

Hi all, 

I have a devstack setup of the Newton version (lab setup).

Issue: cannot ping or SSH to the floating IP assigned to an instance (security 
groups are allowed).

172.16.73.0/24 is the external network or lab network.

neutron net-list
+--+--+-+
| id   | name | subnets 
    |
+--+--+-+
| ccdb22fe-8bae-4378-9b47-82c04a16186e | ext-net  | 
af84a87f-ce6f-4da3-a6bb-5238e97cabd4 172.16.73.0/24 |
| e5999086-9fb1-403b-9273-7bb218ceebe8 | demo-net | 
734e5660-807b-4038-9a86-096889f5d188 10.10.1.0/24   |
+--+--+-+

neutron net-show e5999086-9fb1-403b-9273-7bb218ceebe8
+---+--+
| Field | Value    |
+---+--+
| admin_state_up    | True |
| availability_zone_hints   |  |
| availability_zones    | nova |
| created_at    | 2016-12-12T13:16:49Z |
| description   |  |
| id    | e5999086-9fb1-403b-9273-7bb218ceebe8 |
| ipv4_address_scope    |  |
| ipv6_address_scope    |  |
| mtu   | 1450 |
| name  | demo-net |
| port_security_enabled | True |
| project_id    | 03959ecbd383459eaf5d5389ab4372ac |
| provider:network_type | vxlan    |
| provider:physical_network |  |
| provider:segmentation_id  | 61   |
| revision_number   | 5    |
| router:external   | False    |
| shared    | False    |
| status    | ACTIVE   |
| subnets   | 734e5660-807b-4038-9a86-096889f5d188 |
| tags  |  |
| tenant_id | 03959ecbd383459eaf5d5389ab4372ac |
| updated_at    | 2016-12-12T13:17:31Z |
+---+--+


neutron net-list 
+--+--+-+
| id   | name | subnets 
    |
+--+--+-+
| ccdb22fe-8bae-4378-9b47-82c04a16186e | ext-net  | 
af84a87f-ce6f-4da3-a6bb-5238e97cabd4 172.16.73.0/24 |
| e5999086-9fb1-403b-9273-7bb218ceebe8 | demo-net | 
734e5660-807b-4038-9a86-096889f5d188 10.10.1.0/24   |
+--+--+-+
ubuntu@ubuntu-HP-Compaq-Elite-8300-SFF:/etc/neutron/plugins/ml2$ neutron 
router-list 
+--+-+-+-+---+
| id   | name    | external_gateway_info
   | distributed | ha    |
+--+-+-+-+---+
| ce901ef7-60cd-4d88-828a-3bb5a7e3c9d5 | demo-router | {"network_id": 
"ccdb22fe-   | False   | False |
|  | | 
8bae-4378-9b47-82c04a16186e", "enable_snat": true,  | |   |
|  | | "external_fixed_ips": 
[{"subnet_id": "af84a87f- | |   |
|  | | 
ce6f-4da3-a6bb-5238e97cabd4", "ip_address": | |   |
|  | | "172.16.73.247"}]}   
   | |   |
+--+-+-+-+---+


nova floating-ip-list 
WARNING: Command floating-ip-list is deprecated and will be removed after Nova 
15.0.0 is released. Use python-neutronclient or python-openstackclient instead.

[openstack-dev] [Horizon] Meeting at 20:00UTC this Wednesday, 14th December

2016-12-12 Thread Richard Jones
Hi folks,

The Horizon team will be having our next meeting at 20:00 UTC this
Wednesday, 14th December in #openstack-meeting-3

Meeting agenda is here: https://wiki.openstack.org/wiki/Meetings/Horizon

If we have spare time this meeting I think we should look into getting some
patches reviewed together.

Anyone is welcome to to add agenda items and everyone interested in
Horizon is encouraged to attend.

Since I'm going to be on vacation next week, I won't be holding a team
meeting. If someone else would like to step up to chair it, that'd be nice :-)


Cheers,

Richard



Re: [openstack-dev] New comer to Openstack.

2016-12-12 Thread akula nagaraj reddy
Hi All Thanks for the inputs.
*Thanks and Regards*

* Nagaraj R*
+919538750652

On Tue, Dec 13, 2016 at 6:39 AM, joehuang  wrote:

> Hello, Akula,
>
> You are welcome to join the Tricircle project; please join us if you are
> interested in it: https://wiki.openstack.org/wiki/Tricircle
>
>
> Best Regards
> Chaoyi Huang (joehuang)
> --
> *From:* Gary Kotton [gkot...@vmware.com]
> *Sent:* 11 December 2016 16:17
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] New comer to Openstack.
>
> Welcome aboard. Please see http://www.openstack.org/. A lot of the
> projects have developer guidelines. Maybe start there…
>
>
>
> *From: *akula nagaraj reddy 
> *Reply-To: *OpenStack List 
> *Date: *Sunday, December 11, 2016 at 8:36 AM
> *To: *OpenStack List 
> *Subject: *[openstack-dev] New comer to Openstack.
>
>
>
> Hi All,
>
>
>
> Could someone please tell me how to get started? I have moderate
> experience in Python.
>
>
>
>
>
>
>
> *Thanks and Regards*
>
> * Nagaraj R*
>
> +919538750652
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [cinder] pending review

2016-12-12 Thread Sam Morrison
Hi Cinder devs,

I’ve had a review [1] waiting for some eyes for over a month now. What’s the 
process here, usually I get a response to a review in other projects in a day 
or two. 
Is there someone I need to alert or add to the review specifically for cinder 
patches?

Thanks,
Sam

[1] https://review.openstack.org/#/c/393092/ 





Re: [openstack-dev] [kolla][tc] Video Meetings - input requested

2016-12-12 Thread Steven Dake (stdake)
The issue raised is they violate the 4 opens.


From: Michał Jastrzębski 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, December 12, 2016 at 8:09 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla][tc] Video Meetings - input requested

I think the video meetings Jeffrey is referring to are just that: quick ad hoc 
ways to resolve some technical dispute or implementation. We did that a few times 
to quickly work out kolla k8s issues. Hangouts are a much more efficient way to 
hold discussions like that.

My take on the issue is that we should use all tools available. Scheduling of 
such meetings would defeat their purpose of resolving things *quickly*. If 
something requires scheduling it should be done on our weekly meeting.

Video meetings are a thing in Kolla k8s for one more reason: we want to move fast, 
and scheduling a meeting with proper heads-up for every technical dispute (a couple 
of these every week) would seriously impede our ability to 
deliver. Ocata is our goal!

Tldr; ad hoc video meetings are good for quickly paced dev like in Kolla k8s 
imho

Cheers
Michał

On Dec 12, 2016 12:36 PM, "Ed Leafe" > 
wrote:
On Dec 12, 2016, at 11:16 AM, Jeffrey Zhang 
> wrote:

> Some contributors in kolla have had unscheduled video meetings. This has
> resulted in complaints about inclusiveness. Some contributors can’t even make
> the meeting we have, and another scheduled video meeting might produce a
> situation in which there is no record of decisions made during the video
> meeting. At least with IRC meetings there is always a log.
Occasionally a quick Google hangout is necessary in Nova in order to quickly 
settle an outstanding issue so we can continue to make progress. When that 
happens, the link is posted in the #openstack-nova channel, and anyone who is 
interested can join. So while it’s not logged like an IRC meeting, it’s not 
excluding anyone, and we can quickly remove roadblocks that are harder to do in 
IRC.


-- Ed Leafe








Re: [openstack-dev] New comer to Openstack.

2016-12-12 Thread yatin kumbhare
Hello Akula,

Start with http://www.openstack.org/software/start/

Get familiar with the OpenStack APIs and environment/services.

Regards,
Yatin

On Tue, Dec 13, 2016 at 6:39 AM, joehuang  wrote:

> Hello, Akula,
>
> You are welcome to join the Tricircle project; please join us if you are
> interested: https://wiki.openstack.org/wiki/Tricircle
>
>
> Best Regards
> Chaoyi Huang (joehuang)
> --
> *From:* Gary Kotton [gkot...@vmware.com]
> *Sent:* 11 December 2016 16:17
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] New comer to Openstack.
>
> Welcome aboard. Please see http://www.openstack.org/. A lot of the
> projects have developer guidelines. Maybe start there…
>
>
>
> *From: *akula nagaraj reddy 
> *Reply-To: *OpenStack List 
> *Date: *Sunday, December 11, 2016 at 8:36 AM
> *To: *OpenStack List 
> *Subject: *[openstack-dev] New comer to Openstack.
>
>
>
> Hi All,
>
>
>
> Could someone please tell me how to get started? I have moderate
> experience in Python.
>
>
>
>
>
>
>
> *Thanks and Regards*
>
> * Nagaraj R*
>
> +919538750652 <+91%2095387%2050652>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] No team meeting today - Dec 12th

2016-12-12 Thread Renat Akhmerov
Dougal, thanks.

I’m working on https://bugs.launchpad.net/mistral/+bug/1640378, hoping to get 
it done this week. It is now a critical thing to fix.

Renat Akhmerov
@Nokia

> On 12 Dec 2016, at 21:50, Dougal Matthews  wrote:
> 
> Hi all,
> 
> Todays meeting is cancelled as none of the Mistral team are available to 
> attend. If you have anything you would have liked to discuss please raise it 
> in a reply to this email.
> 
> As for me, I just have a couple of small patches that need second reviews:
> 
> https://review.openstack.org/408617 
> https://review.openstack.org/408515 
> https://review.openstack.org/407454 
> 
> Thanks,
> Dougal
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [ceilometer]

2016-12-12 Thread Li, Xiaoyan
The new notification reports several kinds of capacity information, including 
total, free, allocated, provisioned, and virtual_free. 
For details, please see the following specs: 

https://review.openstack.org/#/c/192542/2/specs/liberty/cinder-capacity-notifications.rst
https://review.openstack.org/#/c/249932/1/specs/mitaka/capacity-headroom.rst
https://review.openstack.org/#/c/206923/
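As a rough sketch of how those figures relate to each other (field names and 
the virtual_free formula below are my reading of the specs, not a verbatim 
notification payload):

```python
# Hedged sketch: how the capacity figures described in the specs relate.
# Field names and the virtual_free formula are illustrative readings of
# the specs above, not the exact notification schema.
def capacity_payload(total, free, allocated, provisioned, over_ratio):
    return {
        "total": total,              # raw capacity of the backend
        "free": free,                # physically free space
        "allocated": allocated,      # capacity the scheduler has handed out
        "provisioned": provisioned,  # sum of sizes of created volumes
        # headroom remaining under thin provisioning:
        "virtual_free": total * over_ratio - provisioned,
    }

p = capacity_payload(total=1000, free=400, allocated=1700,
                     provisioned=1800, over_ratio=2.0)
# p["virtual_free"] -> 200.0 (1000 * 2.0 - 1800)
```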

Lisa

-Original Message-
From: Julien Danjou [mailto:jul...@danjou.info] 
Sent: Thursday, December 8, 2016 5:42 PM
To: Jiong Liu 
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] [ceilometer]

On Thu, Dec 08 2016, Jiong Liu wrote:

Hi Jeremy,

If I'm not mistaken, there's a new pollster in Ocata that uses the Cinder API 
to get the metrics now, so I think the volume usage audit is becoming useless.

> Hello Cinder/Ceilometer community,
>
>  
>
> Is there any guideline on using `cinder-volume-usage-audit` command?
>
>  
>
> Search through cinder/ceilometer logs, I find some messages are sent 
> to ceilometer-collector.
>
> How do I check the output of this command in ceilometer? Do you have 
> any suggestion?
>
>  
>
> Looking forward to your comments. Thanks.
>
>  
>
> BR,
>
> Jeremy Liu
>
>
>

--
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] New comer to Openstack.

2016-12-12 Thread joehuang
Hello, Akula,

You are welcome to join the Tricircle project; please join us if you are 
interested: https://wiki.openstack.org/wiki/Tricircle


Best Regards
Chaoyi Huang (joehuang)

From: Gary Kotton [gkot...@vmware.com]
Sent: 11 December 2016 16:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] New comer to Openstack.

Welcome aboard. Please see http://www.openstack.org/. A lot of the projects 
have developer guidelines. Maybe start there…

From: akula nagaraj reddy 
Reply-To: OpenStack List 
Date: Sunday, December 11, 2016 at 8:36 AM
To: OpenStack List 
Subject: [openstack-dev] New comer to Openstack.

Hi All,

Could someone please tell me how to get started? I have moderate experience in 
Python.



Thanks and Regards
 Nagaraj R
+919538750652
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack New Years Resolutions

2016-12-12 Thread Jay Pipes

On 12/12/2016 06:40 PM, Nick Chase wrote:

OK, so if you were putting together New Year's Resolutions for OpenStack
development for 2017, what would they be?


My resolution will be to rewrite Nova in COBOL. Oh wait, no, that's for 
April 1st, not New Years...


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Re-defining network templates/isolation

2016-12-12 Thread Dan Sneddon
Responses inline...

- Original Message -
> On Mon, Dec 12, 2016 at 12:12:30PM -0500, Tim Rozet wrote:
> > Hello,
> > I wanted to get thoughts about re-thinking how users configure and create
> > new networks with OOO.  The current way to configure network settings for
> > a deployment requires creating nic + network environment templates, and
> > updating the network isolation resource registry.  I think a better
> > approach could consolidate all of the network settings for a deployment
> > into a single yaml file, and then parse that information to create the
> > appropriate nic and network env templates.  We do that in OPNFV Apex with
> > a combination of python and jinja2 using this unified template format:
> > 
> > https://github.com/opnfv/apex/blob/master/config/network/network_settings.yaml
> 
> Thanks for sharing, and for raising this issue Tim.
> 
> Strangely enough I was thinking along similar lines recently and I started
> hacking on some prototype code, just pushed here:
> 
> 
> https://review.openstack.org/#/c/409920
> https://review.openstack.org/#/c/409921
>
> That was originally related to fixing this bug where network isolation is
> a little inconvenient to use when defining custom roles:
> 
> https://bugs.launchpad.net/tripleo/+bug/1633090
> 
> Basically I agree we need some way to define per-network data that can then
> be consumed by jinja2 when we render templates for each role.

Coincidentally, I was working along similar lines in order to support multiple
routed subnets per network (for spine-and-leaf networking). In order to avoid
an exponential number of templates, it makes sense to work from a map of data
that defines the various networks, subnets, default routes, etc. that will
vary from rack to rack with routed networking, and then generate the
templates from that information.
 
> > Furthermore, consider defining new networks in OOO.  Think about how much
> > is involved in creating a new network, subnet, port definition +
> > net_ip_map for that network, VIP. If you look at the tht/network
> > directory, almost all of the templates for ports and networks have the
> > exact same format.  I think you could make the example above dynamic so
> > that a user could define any new network there and the corresponding port,
> > network + subnet template files could be created on the fly.
> 
> Yes, I agree, this could be the next step after enabling the current
> networks for custom roles.  If we do the j2 implementation right for fixing
> the bug above, I think enabling arbitrary additional networks e.g via some
> j2 loops shouldn't be too much additional work.
>
> > I think this creates a much more simple interface for users by exposing
> > networking configuration they need, but also hiding redundant OOO/heat
> > template syntax they don't necessarily care about.  Thoughts?
> 
> So, yeah basically I agree - we should reduce the duplication between
> templates e.g for nic configuration, and j2 render them where possible for
> each role/network.
> 
> The trick here will be doing it so that we maintain backwards compatibility
> - if we're careful that's probably possible, but we'll have to figure out
>   ways to test that ensure we don't break existing users.

I agree that it is possible, and that it will require testing to ensure that
we maintain backwards compatibility.

> My suggestion would be to refactor things to resolve the bug above, and
> possibly also https://bugs.launchpad.net/tripleo/+bug/1625558 which I think
> should really be fixed by generating the nic configs, not adding even more
> example templates.

I am biased, since I wrote the patchset [1], but I would like to merge the
networker.yaml role and then can backport it prior to refactoring the
NIC configs. In general, though, I think we can limit the number of NIC
configs to one per physical topology, and then enable/disable interfaces,
VLANs, routes, etc. for each role based on network connectivity.

[1] - https://review.openstack.org/#/c/400390/

> If we can do some of that during the Ocata timeframe, I expect fully
> composable/custom networks may be possible during Pike?
> 
> Steve


--
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] lxml gate issues

2016-12-12 Thread gordon chung
hi,

this is just a headsup, i'm fast approving my own magic[1] to fix gate 
of the error 'fatal error: libxml/xpath.h: No such file or directory'[2].

some notes:
- i don't know why it doesn't affect aodh (so far) even though aodh has 
the same lxml requirement
- there was a new lxml lib release recently and no changes to gate 
images, but i didn't look any further to validate whether lxml is the real reason
- i copied the bindep entries from Nova's bindep.
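
For reference, the entries in question look roughly like this (reproduced from 
memory of the Nova approach; the review [1] is authoritative):

```text
# bindep.txt (assumed entries) - packages that provide libxml/xpath.h
# and the libxslt headers lxml builds against
libxml2-dev [platform:dpkg test]
libxslt1-dev [platform:dpkg test]
libxml2-devel [platform:rpm test]
libxslt-devel [platform:rpm test]
```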

now that you realise i'm 75% guessing, feel free to revert or adjust 
accordingly for potential future breakage in aodh (i really have no idea 
why it's cool over there).

[1] https://review.openstack.org/#/c/408063/
[2] 
http://logs.openstack.org/63/408063/4/check/gate-ceilometer-docs-ubuntu-xenial/90404dc/console.html#_2016-12-12_20_10_42_051348

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack New Years Resolutions

2016-12-12 Thread Nick Chase


OK, so if you were putting together New Year's Resolutions for OpenStack 
development for 2017, what would they be?



--
Nick Chase
Editor in Chief, OpenStack:Unlocked

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-12 Thread Pradeep Kilambi
On Mon, Dec 12, 2016 at 3:48 PM, Michele Baldessari 
wrote:

> Hi Pradeep,
>
> On Mon, Dec 12, 2016 at 02:51:59PM +0100, Giulio Fidente wrote:
> > On 12/09/2016 04:49 PM, Pradeep Kilambi wrote:
> > >I would like to get some thoughts on $Subject. This came up when i was
> > >discussing the standalone roles for telemetry. Currently when we deploy
> > >redis in tripleo, its a pacemaker managed service. So if we were to
> > >deploy telemetry services on a dedicated node we could. But redis will
> > >have to be on another node? (assuming we don't want to pull in
> > >pacemaker on to telemetry nodes).
>
> Ok so with the composable HA work [1] you should be able to split out
> the redis service on to dedicated nodes and these nodes can be either
> full pacemaker cluster members or only have the pacemaker-remote
> service.
>
> > currently redis instances are not configured as a redis cluster but use
> the
> > master/slave replication model instead and pacemaker is taking care of
> > electing/relocating the redis master as needed
> >
> > there shouldn't be any dependency on the redis profile for the telemetry
> > roles, they should instead just point at the redis_vip
> >
> > the redis_vip is always guaranteed (by haproxy) to point to the redis
> master
> >
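> (For context, the haproxy side of that guarantee typically looks something
> like the sketch below: a tcp-check probing "info replication" so that only
> the node reporting role:master is considered up. Directives and addresses
> are illustrative of the common pattern, not the verbatim TripleO config.)

```text
# Hedged sketch of a haproxy listener that keeps the redis VIP pointed
# at the current master (addresses invented for illustration):
listen redis
  bind 192.0.2.10:6379
  option tcp-check
  tcp-check send "info replication\r\n"
  tcp-check expect string role:master
  server controller-0 192.0.2.11:6379 check inter 1s
  server controller-1 192.0.2.12:6379 check inter 1s
  server controller-2 192.0.2.13:6379 check inter 1s
```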
> > >With most services moved out of pacemaker in Newton, I think its time to
> > >move redis as well? Are there any constraints in moving redis to be
> > >managed by systemd? Looking at how we do it, It should be easily movable
> > >to systemd? Can we consider doing this for Ocata?
> >
> > I think we could look at using the redis cluster which allows multiple
> > masters, but I am not sure this can happen in Ocata ... yet again, there
> > shouldn't be in the telemetry roles any dependency on redis itself
> >
> > if we were to use the cluster mode the only difference would probably be
> > that the redis_vip will start balancing requests across the nodes
>
> In general I am in favour to split out redis from pacemaker. There is
> the question that in theory we'd have two potentially separate quorums,
> but I think that with redis this should not be a big problem.
>
> Maybe let's start with a prototype and see how things look and iterate
> from there? I think it is a bit late for ocata, but we could at least
> start the work without changing the defaults (i.e. let the operator
> override the tripleo::service with a redis base profile instead of the
> pacemaker one)
>
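
To make the "override the tripleo::service" idea concrete, such an 
operator-provided environment file might look roughly like this (the exact 
resource name and profile path are from memory and may differ):

```yaml
# redis-no-pacemaker.yaml - hedged sketch of an operator override that
# points the Redis service at a plain (systemd-managed) profile instead
# of the pacemaker-managed default, without changing tripleo defaults:
resource_registry:
  OS::TripleO::Services::Redis: ../puppet/services/database/redis.yaml
```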

Makes sense. I understand it might be too late for Ocata. We don't really have
any urgency so long as we can split out redis, as you say we can with
composable HA. I was more curious what the long-term plan was, and what you
said makes sense.

Thanks.
~ Prad


>
> Does that make sense,
> Michele
>
> [1] https://review.openstack.org/#/q/topic:bp/composable-ha
> --
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun][Glare][Glance] Building Docker images

2016-12-12 Thread Denis Makogon
Hello to All.

I’d like to get initial feedback on the idea of building Docker images
through Zun, using Glare as an artifact store for all the static components
an image requires.

So, the idea is to be able to build a Docker image through the Zun
API, storing all the static data required for the build in
Glare or Swift. To keep the same UX as with plain Docker, it would be
better to use a Dockerfile as the description format for image building.

In the image creation process, Glare could take the role of an artifact store,
where users keep, say, the source code of the applications that will run in
containers, static data, etc. Those artifacts would be pulled during
image creation and injected into the image (similar to the context
creation step of a Docker image build with the native CLI). Please note that
artifacts are completely optional for images, but they would make it possible
to keep artifacts in dedicated storage instead of transferring all data
through the Zun API (the opposite of the Docker build-context concept).

Once the image is created, it can be stored in the Docker instance underlying
Zun, or published to Glance or Swift for further consumption (a user who needs
to save the image would use the Glance image download API). I mention
Swift vs. Glance because Swift has the concept of temp URLs, which can be
accessed without authorization. That feature makes it possible to use Swift as
storage from which an image can be imported into Docker using the import API [1].
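
To make the temp URL part concrete, here is a sketch of how such a URL is
signed (per the Swift tempurl middleware: an HMAC-SHA1 over the method, expiry
and object path; the account, container and key names below are invented):

```python
import hmac
from hashlib import sha1
from time import time

def swift_temp_url(path, key, ttl=3600, method="GET", now=None):
    """Build a Swift temporary URL query string.

    `path` is /v1/<account>/<container>/<object>; `key` is the tempurl
    secret set on the account. Sketch only - names are illustrative.
    """
    expires = int((now if now is not None else time()) + ttl)
    # The signed body is exactly: method, newline, expiry, newline, path.
    body = f"{method}\n{expires}\n{path}"
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# A link Docker could import the image from without an auth token:
url = swift_temp_url("/v1/AUTH_demo/images/app.tar", "secret", now=0)
```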


Any feedback on the idea is appreciated.

Kind regards,

Denis Makogon

[1] https://docs.docker.com/engine/reference/commandline/import/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Re-defining network templates/isolation

2016-12-12 Thread Steven Hardy
On Mon, Dec 12, 2016 at 12:12:30PM -0500, Tim Rozet wrote:
> Hello,
> I wanted to get thoughts about re-thinking how users configure and create new 
> networks with OOO.  The current way to configure network settings for a 
> deployment requires creating nic + network environment templates, and 
> updating the network isolation resource registry.  I think a better approach 
> could consolidate all of the network settings for a deployment into a single 
> yaml file, and then parse that information to create the appropriate nic and 
> network env templates.  We do that in OPNFV Apex with a combination of python 
> and jinja2 using this unified template format:
> 
> https://github.com/opnfv/apex/blob/master/config/network/network_settings.yaml

Thanks for sharing, and for raising this issue Tim.

Strangely enough I was thinking along similar lines recently and I started
hacking on some prototype code, just pushed here:


https://review.openstack.org/#/c/409920
https://review.openstack.org/#/c/409921

That was originally related to fixing this bug where network isolation is
a little inconvenient to use when defining custom roles:

https://bugs.launchpad.net/tripleo/+bug/1633090

Basically I agree we need some way to define per-network data that can then
be consumed by jinja2 when we render templates for each role.

> Furthermore, consider defining new networks in OOO.  Think about how much is 
> involved in creating a new network, subnet, port definition + net_ip_map for 
> that network, VIP. If you look at the tht/network directory, almost all of 
> the templates for ports and networks have the exact same format.  I think you 
> could make the example above dynamic so that a user could define any new 
> network there and the corresponding port, network + subnet template files 
> could be created on the fly.

Yes, I agree, this could be the next step after enabling the current
networks for custom roles.  If we do the j2 implementation right for fixing
the bug above, I think enabling arbitrary additional networks e.g via some
j2 loops shouldn't be too much additional work.
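
To illustrate, the per-network data plus the j2 loop could look roughly like
this (purely a sketch; the key names and paths are invented, not the actual
tripleo-heat-templates syntax):

```yaml
# Hypothetical per-network input data consumed by jinja2:
networks:
  - name: InternalApi
    vlan: 20
    ip_subnet: 172.16.2.0/24
  - name: Storage
    vlan: 30
    ip_subnet: 172.16.1.0/24

# Hypothetical j2 fragment, rendered per role, emitting one port
# resource registry entry per network:
#   {% for network in networks %}
#   OS::TripleO::{{role.name}}::Ports::{{network.name}}Port: ../network/ports/{{network.name|lower}}.yaml
#   {% endfor %}
```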

> I think this creates a much more simple interface for users by exposing 
> networking configuration they need, but also hiding redundant OOO/heat 
> template syntax they don't necessarily care about.  Thoughts?

So, yeah basically I agree - we should reduce the duplication between
templates e.g for nic configuration, and j2 render them where possible for
each role/network.

The trick here will be doing it so that we maintain backwards compatibility
- if we're careful that's probably possible, but we'll have to figure out
  ways to test that ensure we don't break existing users.

My suggestion would be to refactor things to resolve the bug above, and
possibly also https://bugs.launchpad.net/tripleo/+bug/1625558 which I think
should really be fixed by generating the nic configs, not adding even more
example templates.

If we can do some of that during the Ocata timeframe, I expect fully
composable/custom networks may be possible during Pike?

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Magnum_driver

2016-12-12 Thread Tim Hinrichs
That job is non-voting, so since you're adding a new datasource, you can
ignore it.  Nothing to do now but wait for people to give you reviews.

Tim

On Mon, Dec 12, 2016 at 1:12 PM Ruben 
wrote:

> Hi Tim,
> thanks a lot for your help.
> I've seen that there is a failure for "gate-congress-pe-replicated-nv".
> It's strange because I didn't have failure before.
> So, what should I do now?
>
> Ruben
>
> - Messaggio originale -
> Da: "Tim Hinrichs" 
> A: "Ruben" 
> Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> timothy.l.hinri...@gmail.com>
> Inviato: Lunedì, 12 dicembre 2016 19:58:24
> Oggetto: Re: [Congress] Magnum_driver
>
> Hi Ruben,
>
> Looked like there were still multiple changes, so I squashed them into 1
> and fixed up the requirements.txt file.  (You should only need to add the
> python-magnumclient to the requirements.txt file.)  I also abandoned one
> that was incorporated into the single fix.  Here it is.  Now the community
> should give you reviews.
>
> https://review.openstack.org/#/c/404222
>
> Tim
>
>
>
>
> On Fri, Dec 9, 2016 at 3:06 PM Ruben 
> wrote:
>
> > Hi Tim,
> > sorry for the late, but I've had a busy week.
> > Anyway, I've tried to push the magnum_driver for review as a single
> > commit.  I'm not sure whether I succeeded.
> >
> > Ruben
> >
> > - Messaggio originale -
> > Da: "Tim Hinrichs" 
> > A: "Ruben" 
> > Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> > timothy.l.hinri...@gmail.com>
> > Inviato: Mercoledì, 30 novembre 2016 22:04:32
> > Oggetto: Re: [Congress] Magnum_driver
> >
> > Hi Ruben,
> >
> > What you're doing is correct.  The downside is that it creates a new
> commit
> > for every change you make, and all of those commits show up on gerrit.
> In
> > OpenStack (and other projects I've seen that use Gerrit for code reviews)
> > you squash those commits into 1 change so that it's easier for reviewers
> to
> > see the change as a whole.  (Projects that use Github for code reviews do
> > more like what you're doing now).  To see your
> >
> > Here's a blog showing you what to do...
> > https://ariejan.net/2011/07/05/git-squash-your-latests-commits-into-one/
> >
> > You can probably do
> >
> > $ git rebase -i
> >
> > and then follow the instructions in the blog that say you replace the
> > 'pick' for all the commits after the first with 'squash' (or 's' for
> > short).  So something like the following.
> >
> > pick f392171 Added new feature X
> > squash ba9dd9a Added new elements to page design
> > squash df71a27 Updated CSS for new elements
> >
> > After that, you should be able to do ...
> >
> > $ git review
> >
> > Tim
> >
> > On Wed, Nov 30, 2016 at 5:23 AM Ruben 
> > wrote:
> >
> > > Hi Tim,
> > > what should I do to squash all the commits into a single one?
> > >
> > > To add the code to review I made:
> > >
> > > git add 
> > > git commit
> > > git review
> > >
> > > Isn't it correct?
> > >
> > > Ruben
> > >
> > > - Messaggio originale -
> > > Da: "Tim Hinrichs" 
> > > A: "Ruben" 
> > > Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> > > timothy.l.hinri...@gmail.com>
> > > Inviato: Mercoledì, 30 novembre 2016 2:34:22
> > > Oggetto: Re: [Congress] Magnum_driver
> > >
> > > Hi Ruben,
> > >
> > > I left a comment on one of the changes; after you take care of that
> I'll
> > > take a closer look at the code.  Let me know if you have questions.
> > >
> > > Tim
> > >
> > > On Tue, Nov 29, 2016 at 4:06 AM Ruben <
> r.manganiel...@studenti.unisa.it>
> > > wrote:
> > >
> > > > Hi Tim,
> > > > I've added the code of magnum_driver and its unit test to review.
> > > > It seems everything works.
> > > >
> > > > Ruben
> > > >
> > > > - Original Message -
> > > > From: "Tim Hinrichs" 
> > > > To: "Ruben" 
> > > > Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> > > > timothy.l.hinri...@gmail.com>
> > > > Sent: Saturday, November 26, 2016 12:48:12 AM
> > > > Subject: Re: [Congress] Magnum_driver
> > > >
> > > > Definitely push that code up into Gerrit so we can all take a look.
> > Data
> > > > like pods and containers is probably the most valuable data from
> > Magnum,
> > > so
> > > > I'd definitely recommend adding that.  But push the code you have to
> > > Gerrit
> > > > first.  (As long as you leave the ChangeId the same each time you
> push
> > to
> > > > Gerrit, Gerrit will keep all of the versions you pushed organized
> > > together,
> > > > yet keep the versions separate.)
> > > >
> > > > Tim
> > > >
> > > > On Fri, Nov 25, 2016 at 3:06 PM Ruben <
> > r.manganiel...@studenti.unisa.it>
> > > > wrote:
> > > >
> > > > > Hi Tim,
> > > > > You are great. It works! Thanks a lot!
> > > > > 

Re: [openstack-dev] [Congress] Magnum_driver

2016-12-12 Thread Ruben
Hi Tim,
thanks a lot for your help.
I've seen that there is a failure for "gate-congress-pe-replicated-nv".
It's strange because I didn't have failure before.
So, what should I do now?

Ruben

- Messaggio originale -
Da: "Tim Hinrichs" 
A: "Ruben" 
Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" 

Inviato: Lunedì, 12 dicembre 2016 19:58:24
Oggetto: Re: [Congress] Magnum_driver

Hi Ruben,

Looked like there were still multiple changes, so I squashed them into 1
and fixed up the requirements.txt file.  (You should only need to add the
python-magnumclient to the requirements.txt file.)  I also abandoned one
that was incorporated into the single fix.  Here it is.  Now the community
should give you reviews.

https://review.openstack.org/#/c/404222

Tim




On Fri, Dec 9, 2016 at 3:06 PM Ruben 
wrote:

> Hi Tim,
> sorry for the late, but I've had a busy week.
> Anyway, I've tried to push the magnum_driver for review as a single commit.
> I'm not sure whether I succeeded.
>
> Ruben
>
> - Messaggio originale -
> Da: "Tim Hinrichs" 
> A: "Ruben" 
> Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> timothy.l.hinri...@gmail.com>
> Inviato: Mercoledì, 30 novembre 2016 22:04:32
> Oggetto: Re: [Congress] Magnum_driver
>
> Hi Ruben,
>
> What you're doing is correct.  The downside is that it creates a new commit
> for every change you make, and all of those commits show up on gerrit.  In
> OpenStack (and other projects I've seen that use Gerrit for code reviews)
> you squash those commits into 1 change so that it's easier for reviewers to
> see the change as a whole.  (Projects that use Github for code reviews do
> more like what you're doing now).  To see your
>
> Here's a blog showing you what to do...
> https://ariejan.net/2011/07/05/git-squash-your-latests-commits-into-one/
>
> You can probably do
>
> $ git rebase -i
>
> and then follow the instructions in the blog that say you replace the
> 'pick' for all the commits after the first with 'squash' (or 's' for
> short).  So something like the following.
>
> pick f392171 Added new feature X
> squash ba9dd9a Added new elements to page design
> squash df71a27 Updated CSS for new elements
>
> After that, you should be able to do ...
>
> $ git review
>
> Tim
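
Concretely, the squash recipe above can be run end-to-end non-interactively, 
e.g. (the repo contents and commit messages are invented for illustration):

```shell
# Non-interactive version of the rebase/squash recipe: build a throwaway
# repo with an initial commit plus three changes, then squash the last
# three into one.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"
for n in 1 2 3; do
  echo "$n" > file.txt
  git add file.txt
  git commit -q -m "change $n"
done
# GIT_SEQUENCE_EDITOR replaces the interactive todo editor: the sed
# script keeps the first "pick" and turns the rest into "squash";
# GIT_EDITOR=true accepts the combined commit message as-is.
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/squash/"' \
  GIT_EDITOR=true git rebase -i HEAD~3
git log --oneline   # "initial" plus one squashed commit remain
```

After that, a single `git review` pushes one change to Gerrit instead of three.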
>
> On Wed, Nov 30, 2016 at 5:23 AM Ruben 
> wrote:
>
> > Hi Tim,
> > what should I do to squash all the commits into a single one?
> >
> > To add the code to review I made:
> >
> > git add 
> > git commit
> > git review
> >
> > Isn't it correct?
> >
> > Ruben
> >
> > - Messaggio originale -
> > Da: "Tim Hinrichs" 
> > A: "Ruben" 
> > Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> > timothy.l.hinri...@gmail.com>
> > Inviato: Mercoledì, 30 novembre 2016 2:34:22
> > Oggetto: Re: [Congress] Magnum_driver
> >
> > Hi Ruben,
> >
> > I left a comment on one of the changes; after you take care of that I'll
> > take a closer look at the code.  Let me know if you have questions.
> >
> > Tim
> >
> > On Tue, Nov 29, 2016 at 4:06 AM Ruben 
> > wrote:
> >
> > > Hi Tim,
> > > I've added the code of magnum_driver and its unit test to review.
> > > It seems everything works.
> > >
> > > Ruben
> > >
> > > - Original Message -
> > > From: "Tim Hinrichs" 
> > > To: "Ruben" 
> > > Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> > > timothy.l.hinri...@gmail.com>
> > > Sent: Saturday, November 26, 2016 12:48:12 AM
> > > Subject: Re: [Congress] Magnum_driver
> > >
> > > Definitely push that code up into Gerrit so we can all take a look.
> Data
> > > like pods and containers is probably the most valuable data from
> Magnum,
> > so
> > > I'd definitely recommend adding that.  But push the code you have to
> > Gerrit
> > > first.  (As long as you leave the ChangeId the same each time you push
> to
> > > Gerrit, Gerrit will keep all of the versions you pushed organized
> > together,
> > > yet keep the versions separate.)
> > >
> > > Tim
> > >
> > > On Fri, Nov 25, 2016 at 3:06 PM Ruben <
> r.manganiel...@studenti.unisa.it>
> > > wrote:
> > >
> > > > Hi Tim,
> > > > You are great. It works! Thanks a lot!
> > > > I've also solved the problem with py27. The unit test seems to work.
> > > > The only thing that doesn't seem to work is populating the
> > > > 'clusters_links' and 'cluster_templates_links' tables: they are empty.
> > > > Also, the 'labels' table is empty.
> > > > I have no errors anyway.
> > > > Do these look like problems to you?
> > > >
> > > > Should I try to add the translation of pods, containers and
> > > > services?
> > > >
> > > > I've added the code for review.
> > > >
> > > > Ruben
> > > > - Original Message 

Re: [openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-12 Thread Michele Baldessari
Hi Pradeep,

On Mon, Dec 12, 2016 at 02:51:59PM +0100, Giulio Fidente wrote:
> On 12/09/2016 04:49 PM, Pradeep Kilambi wrote:
> >I would like to get some thoughts on $Subject. This came up when i was
> >discussing the standalone roles for telemetry. Currently when we deploy
> >redis in tripleo, its a pacemaker managed service. So if we were to
> >deploy telemetry services on a dedicated node we could. But redis will
> >have to be on a another node? (assuming we dont want to pull in
> >pacemaker on to telemetry nodes).

Ok so with the composable HA work [1] you should be able to split out
the redis service on to dedicated nodes and these nodes can be either
full pacemaker cluster members or only have the pacemaker-remote
service.

> currently redis instances are not configured as a redis cluster but use the
> master/slave replication model instead and pacemaker is taking care of
> electing/relocating the redis master as needed
> 
> there shouldn't be any dependency on the redis profile for the telemetry
> roles, they should instead just point at the redis_vip
> 
> the redis_vip is always guaranteed (by haproxy) to point to the redis master
> 
> >With most services moved out of pacemaker in Newton, I think its time to
> >move redis as well? Are there any constraints in moving redis to be
> >managed by systemd? Looking at how we do it, It should be easily movable
> >to systemd? Can we consider doing this for Ocata?
> 
> I think we could look at using the redis cluster which allows multiple
> masters, but I am not sure this can happen in Ocata ... yet again, there
> shouldn't be in the telemetry roles any dependency on redis itself
> 
> if we were to use the cluster mode the only difference would probably be
> that the redis_vip will start balancing requests across the nodes

In general I am in favour of splitting redis out of pacemaker. There is
the question that in theory we'd have two potentially separate quorums,
but I think that with redis this should not be a big problem.

Maybe let's start with a prototype and see how things look and iterate
from there? I think it is a bit late for ocata, but we could at least
start the work without changing the defaults (i.e. let the operator
override the tripleo::service with a redis base profile instead of the
pacemaker one)
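A first prototype along those lines might be nothing more than an environment file that remaps the Redis service to a non-pacemaker profile, e.g. (the resource name and template path below are assumptions and should be checked against tripleo-heat-templates):

```yaml
# Hypothetical override environment: point the Redis service at the plain
# (systemd-managed) base profile instead of the pacemaker-managed one.
resource_registry:
  OS::TripleO::Services::Redis: ../puppet/services/database/redis.yaml
```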

Does that make sense,
Michele

[1] https://review.openstack.org/#/q/topic:bp/composable-ha
-- 
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] Video Meetings - input requested

2016-12-12 Thread Ed Leafe
On Dec 12, 2016, at 11:16 AM, Jeffrey Zhang  wrote:

> Some contributors in kolla have had unscheduled video meetings. This has
> resulted in complaints about inclusiveness. Some contributors can’t even make
> the meeting we have, and another scheduled video meeting might produce a
> situation in which there is no record of decisions made during the video
> meeting. At least with IRC meetings there is always a log.

Occasionally a quick Google hangout is necessary in Nova in order to quickly 
settle an outstanding issue so we can continue to make progress. When that 
happens, the link is posted in the #openstack-nova channel, and anyone who is 
interested can join. So while it’s not logged like an IRC meeting, it’s not
excluding anyone, and we can quickly remove roadblocks that are harder to
resolve in IRC.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to DNS integration behavior?

2016-12-12 Thread Hayes, Graham
On 12/12/2016 19:52, Kimball, Conrad wrote:
> We are in the early phases of a project to deploy OpenStack and we face
> the question of DNS integration when launching a VM instance.
>
>
>
> We have found the DNS Integration documentation at
> http://docs.openstack.org/mitaka/networking-guide/config-dns-int.html,
> but to our understanding it doesn’t do what we want to do.
>
>
>
> Where is the best forum for discussing possible changes in this behavior?
>
>

This would be a change to neutron, so it would go through their bug / RFE
process [0]

I added [neutron] to the subject to grab their attention.

0 - http://docs.openstack.org/developer/neutron/policies/blueprints.html

>
> - - -
>
>
>
> Specifically, we do not tie DNS domains to networks – any particular
> network may have VM ports with a variety of DNS domains, with the choice
> of DNS domain left to the person deploying the VM instance (we use DNS
> domains to indicate business unit association, infrastructure function,
> and so forth).
>
>
>
> So we would want the DNS integration to allow specifying both a dns_name
> and a dns_domain when creating a port.  The documentation link above
> says this is allowed for floating IPs, but not for ports – ports can
> specify only a dns_name and always inherit the dns_domain from the network.
>
>
>
> /Conrad Kimball/
>
> Associate Technical Fellow
>
> Chief Architect, Enterprise Cloud Services
>
> Engineering, Operations & Technology / Information Technology / Core
> Infrastructure Engineering
>
> conrad.kimb...@boeing.com 
>
> P.O. Box 3707, Mail Code 7M-TE
>
> Seattle, WA  98124-2207
>
> Bellevue 33-11 bldg, office 3A6-3.9
>
> Mobile:  425-591-7802
>
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Changes to DNS integration behavior?

2016-12-12 Thread Kimball, Conrad
We are in the early phases of a project to deploy OpenStack and we face the 
question of DNS integration when launching a VM instance.

We have found the DNS Integration documentation at 
http://docs.openstack.org/mitaka/networking-guide/config-dns-int.html, but to 
our understanding it doesn't do what we want to do.

Where is the best forum for discussing possible changes in this behavior?

- - -

Specifically, we do not tie DNS domains to networks - any particular network 
may have VM ports with a variety of DNS domains, with the choice of DNS domain 
left to the person deploying the VM instance (we use DNS domains to indicate 
business unit association, infrastructure function, and so forth).

So we would want the DNS integration to allow specifying both a dns_name and a 
dns_domain when creating a port.  The documentation link above says this is 
allowed for floating IPs, but not for ports - ports can specify only a dns_name 
and always inherit the dns_domain from the network.
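As the Mitaka guide describes it, the floating-IP case accepts both attributes while a port accepts only dns_name; roughly (resource and network names below are illustrative):

```shell
# Floating IPs can carry both attributes (per the Mitaka DNS-integration guide):
neutron floatingip-create external-net \
    --dns-name my-vm --dns-domain example.org.

# Ports accept only dns_name; dns_domain is inherited from the network:
neutron port-create my-net --dns-name my-vm
```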

Conrad Kimball
Associate Technical Fellow
Chief Architect, Enterprise Cloud Services
Engineering, Operations & Technology / Information Technology / Core 
Infrastructure Engineering
conrad.kimb...@boeing.com
P.O. Box 3707, Mail Code 7M-TE
Seattle, WA  98124-2207
Bellevue 33-11 bldg, office 3A6-3.9
Mobile:  425-591-7802

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] About k8s integration

2016-12-12 Thread Hongbin Lu
Shubham,

Thanks for the inputs. Let’s continue the discussion at the team meeting later 
today.

Best regards,
Hongbin

From: shubham sharma [mailto:shubham@gmail.com]
Sent: December-12-16 12:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] About k8s integration

Hi Hongbin,

Both of these 2 options have pros and cons :
1. If we create our own "pod like concept" in Zun, then:
-> It duplicates effort for features that are already available in a particular
COE.
-> It requires huge effort compared to the second option.
-> When we create our own pods, it is hard to claim that we are not competing
with existing COEs.
-> If we provide our own pods and similar features, then we will need some more
features (e.g. Replication Controller) to attract users to Zun.

2. If we create a proxy for k8s, then:
-> It will be specific to k8s, and we will need to do the same for the other
COEs as well.
-> We do not want to compete with any COE, and that can be achieved by
following this approach.
-> Our choice of k8s first might be questioned, as in "why support a specific
COE first, why not swarm first?"
-> Moreover, behaving only as a proxy doesn't make much sense. Instead of using
Zun, users will just use the native CLIs of the respective COE. Zun will just
add complexity for such users.

Looking at the above, I still feel that we should have our own implementation
of pods (or a similar term for it), because that's where we should be heading
with Zun. Zun can't be just for CRUD operations on containers. If we take a
common subset of features from each COE and have them in Zun, then our goal is
clear: we want containers supported inside OpenStack without the need for any
COE. It sounds like competing with COEs, but in fact we are just trying to make
OpenStack friendly for the users/operators who want to use containers
inside/with OpenStack.
Also, this design seems quite extensible, i.e. we can easily integrate with
other COEs without users having to worry about which COE they are actually
running. The same set of APIs can be used for any COE at the backend.

For the required effort, we need to think and plan.

Regards
Shubham

On Wed, Dec 7, 2016 at 6:26 AM, Hongbin Lu 
> wrote:
Hi all,

This is a continued discussion of the k8s integration blueprint [1]. Currently, 
Zun exposes a container-oriented APIs that provides service for end-users to 
operate on containers (i.e. CRUD). At the last team meeting, we discussed how 
to introduce k8s to Zun as an alternative to the Docker driver. There are two 
approaches that has been discussed:

1. Introduce the concept of Pod. If we go with this approach, an API endpoint 
(i.e. /pods) will be added to the Zun APIs. Both Docker driver and k8s driver 
need to implement this endpoint. In addition, all the future drivers need to 
implement this endpoint as well (or throw a NotImplemented exception). Some of 
our team members raised concerns about this approach. The main concern is that 
this approach will hide a lot of k8s-specific features (i.e. replication 
controller) or there will be a lot of work to bring all those features to Zun.

  $ zun pod-create … # this creates a k8s pod (if the k8s driver is used), or a
sandbox with a set of containers (if the docker driver is used)
  $ zun create … # this creates a k8s pod with one container, or a sandbox with
one container
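The per-driver contract of approach 1 (implement the pod endpoint or raise NotImplemented) could be sketched like this; all class and method names below are hypothetical, not Zun's actual driver API:

```python
# Hypothetical sketch of approach 1: a common driver interface where the pod
# endpoint is optional. Names are illustrative, not Zun's real driver API.
import abc


class ContainerDriver(abc.ABC):
    @abc.abstractmethod
    def create_container(self, name):
        """Every driver must support plain container CRUD."""

    def create_pod(self, name, containers):
        """Drivers with no pod concept inherit this default."""
        raise NotImplementedError("driver has no pod support")


class DockerDriver(ContainerDriver):
    def create_container(self, name):
        return {"driver": "docker", "container": name}

    def create_pod(self, name, containers):
        # Docker has no native pod; emulate it with a sandbox of containers.
        return {"driver": "docker", "sandbox": name, "containers": containers}


class K8sDriver(ContainerDriver):
    def create_container(self, name):
        # A single container maps to a one-container pod.
        return self.create_pod(name, [name])

    def create_pod(self, name, containers):
        return {"driver": "k8s", "pod": name, "containers": containers}


print(DockerDriver().create_pod("web", ["nginx"]))
print(K8sDriver().create_container("app"))
```

The unified `zun pod-create` call would then dispatch to whichever driver the deployment configured, with non-pod drivers failing loudly rather than silently.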

2. Introduce a dedicated k8s endpoint that acts as a proxy to k8s APIs. This 
will expose all the k8s features but users won’t have a unified APIs across 
drivers.

  $ zun k8s pod create … # this creates a k8s pod
  $ zun docker container create … # this creates a docker container
  $ zun create … # the behavior of this command is unclear

So far, we haven’t decided which approach to use (or use a third approach), but 
we wanted to collect more feedback before making a decision. Thoughts?

[1] https://blueprints.launchpad.net/zun/+spec/k8s-integration

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas] [octavia]vip failed to be plugged in to the amphorae vm

2016-12-12 Thread Wanjing Xu (waxu)
Lubosz

There are a lot of retries; I just omitted them in the email. So how do I fix
this VIP plug error?

Thanks
Wanjing
From: "Kosnik, Lubosz" 
Date: Friday, December 9, 2016 at 4:38 PM
To: "Wanjing Xu (waxu)" 
Cc: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [neutron-lbaas] [octavia]vip failed to be plugged 
in to the amphorae vm

Plugging VIP worked without any problems.
The log shows that you have a very restrictive timeout configuration. 7 retries 
is very low. Please reconfigure this to a much bigger value.
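The retry behaviour referred to here is driven by the amphora connection options in octavia.conf, roughly like the snippet below (option names and defaults are from memory and should be verified against your Octavia release):

```ini
# Assumed octavia.conf fragment -- check option names for stable/mitaka.
[haproxy_amphora]
# Number of times the controller retries the amphora REST API, and the
# wait between attempts, before giving up and reverting the flow.
connection_max_retries = 120
connection_retry_interval = 5
```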

Regards,
Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Dec 9, 2016, at 3:46 PM, Wanjing Xu (waxu) 
> wrote:

I have stable/mitaka Octavia, which has been running OK until today. Whenever I 
create a loadbalancer, the amphora VM is created with the mgmt NIC, but it looks 
like the VIP plugging failed. I can ping the amphora mgmt NIC from the 
controller (where the Octavia processes run), but some REST API call into the 
amphora to plug in the VIP appears to fail:

Ping works:

[localadmin@dmz-eth2-ucs1]logs> ping 192.168.0.7
PING 192.168.0.7 (192.168.0.7) 56(84) bytes of data.
64 bytes from 192.168.0.7: icmp_seq=1 ttl=64 time=1.11 ms
64 bytes from 192.168.0.7: icmp_seq=2 ttl=64 time=0.461 ms
^C


o-cw.log:

2016-12-09 11:03:54.468 31408 DEBUG 
octavia.controller.worker.tasks.network_tasks [-] Retrieving network details 
for amphora ae80ae54-395f-4fad-b0de-39f17dd9b19e execute 
/opt/stack/octavia/octavia/controller/worker/tasks/network_tasks.py:380
2016-12-09 11:03:55.441 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' 
(76823522-b504-4d6a-8ba7-c56015cb39a9) transitioned into state 'SUCCESS' from 
state 'RUNNING' with result '{u'ae80ae54-395f-4fad-b0de-39f17dd9b19e': 
}' 
_task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
2016-12-09 11:03:55.444 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraPostVIPPlug' 
(3b798537-3f20-46a3-abe2-a2c24c569cd9) transitioned into state 'RUNNING' from 
state 'PENDING' _task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:218
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
https://192.168.0.7:9443/0.5/plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:221
2016-12-09 11:03:55.452 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:56.458 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:57.462 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:58.466 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:59.470 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:00.474 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:02.487 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
……
ransitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2016-12-09 11:29:10.509 31408 WARNING 
octavia.controller.worker.controller_worker [-] Flow 
'post-amphora-association-octavia-post-loadbalancer-amp_association-subflow' 
(f7b0d080-830a-4d6a-bb85-919b6461252f) transitioned into state 'REVERTED' from 
state 'RUNNING'
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher [-] Exception 
during message handling: contacting the amphora timed out
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
138, in _dispatch_and_reply
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
185, in _dispatch
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, 

Re: [openstack-dev] [Congress] Magnum_driver

2016-12-12 Thread Tim Hinrichs
Hi Ruben,

It looked like there were still multiple changes, so I squashed them into 1
and fixed up the requirements.txt file.  (You should only need to add
python-magnumclient to the requirements.txt file.)  I also abandoned one change
that was incorporated into the single fix.  Here it is.  Now the community
should give you reviews.

https://review.openstack.org/#/c/404222

Tim
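The squash flow discussed further down this thread can be exercised non-interactively in a throwaway repo, for anyone who wants to see the mechanics (GNU sed is assumed for the in-place edit; a real change would finish with 'git review'):

```shell
set -e
# Sketch: one base commit plus three feature commits in a scratch repo,
# then the three feature commits are folded into a single commit.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name "You"
echo base > base.txt && git add base.txt && git commit -qm "base"
for i in 1 2 3; do
  echo "$i" > "f$i.txt" && git add "f$i.txt" && git commit -qm "feature part $i"
done
# Turn every todo line after the first from 'pick' into 'squash' -- the same
# edit the blog post describes doing by hand in the interactive editor.
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/squash/"' \
  GIT_EDITOR=true git rebase -i HEAD~3
git rev-list --count HEAD   # prints 2: base plus the squashed feature commit
```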




On Fri, Dec 9, 2016 at 3:06 PM Ruben 
wrote:

> Hi Tim,
> sorry for the delay, but I've had a busy week.
> Anyway, I've tried to submit the magnum_driver for review as a single commit.
> I don't know if I managed to.
>
> Ruben
>
> - Original Message -
> From: "Tim Hinrichs" 
> To: "Ruben" 
> Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> timothy.l.hinri...@gmail.com>
> Sent: Wednesday, November 30, 2016 22:04:32
> Subject: Re: [Congress] Magnum_driver
>
> Hi Ruben,
>
> What you're doing is correct.  The downside is that it creates a new commit
> for every change you make, and all of those commits show up on gerrit.  In
> OpenStack (and other projects I've seen that use Gerrit for code reviews)
> you squash those commits into 1 change so that it's easier for reviewers to
> see the change as a whole.  (Projects that use Github for code reviews do
> more like what you're doing now).  To see your
>
> Here's a blog showing you what to do...
> https://ariejan.net/2011/07/05/git-squash-your-latests-commits-into-one/
>
> You can probably do
>
> $ git rebase -i
>
> and then follow the instructions in the blog that say you replace the
> 'pick' for all the commits after the first with 'squash' (or 's' for
> short).  So something like the following.
>
> pick f392171 Added new feature X
> squash ba9dd9a Added new elements to page design
> squash df71a27 Updated CSS for new elements
>
> After that, you should be able to do ...
>
> $ git review
>
> Tim
>
> On Wed, Nov 30, 2016 at 5:23 AM Ruben 
> wrote:
>
> > Hi Tim,
> > what should I do to squash all the commits into a single one?
> >
> > To add the code to review I made:
> >
> > git add 
> > git commit
> > git review
> >
> > Isn't it correct?
> >
> > Ruben
> >
> > - Original Message -
> > From: "Tim Hinrichs" 
> > To: "Ruben" 
> > Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> > timothy.l.hinri...@gmail.com>
> > Sent: Wednesday, November 30, 2016 2:34:22
> > Subject: Re: [Congress] Magnum_driver
> >
> > Hi Ruben,
> >
> > I left a comment on one of the changes; after you take care of that I'll
> > take a closer look at the code.  Let me know if you have questions.
> >
> > Tim
> >
> > On Tue, Nov 29, 2016 at 4:06 AM Ruben 
> > wrote:
> >
> > > Hi Tim,
> > > I've added the code of magnum_driver and its unit test to review.
> > > It seems everything works.
> > >
> > > Ruben
> > >
> > > - Original Message -
> > > From: "Tim Hinrichs" 
> > > To: "Ruben" 
> > > Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> > > timothy.l.hinri...@gmail.com>
> > > Sent: Saturday, November 26, 2016 12:48:12 AM
> > > Subject: Re: [Congress] Magnum_driver
> > >
> > > Definitely push that code up into Gerrit so we can all take a look.
> Data
> > > like pods and containers is probably the most valuable data from
> Magnum,
> > so
> > > I'd definitely recommend adding that.  But push the code you have to
> > Gerrit
> > > first.  (As long as you leave the ChangeId the same each time you push
> to
> > > Gerrit, Gerrit will keep all of the versions you pushed organized
> > together,
> > > yet keep the versions separate.)
> > >
> > > Tim
> > >
> > > On Fri, Nov 25, 2016 at 3:06 PM Ruben <
> r.manganiel...@studenti.unisa.it>
> > > wrote:
> > >
> > > > Hi Tim,
> > > > You are great. It works! Thanks a lot!
> > > > I've also solved the problem with py27. The unit test seems to work.
> > > > The only thing that doesn't seem to work is populating the
> > > > 'clusters_links' and 'cluster_templates_links' tables: they are empty.
> > > > Also, the 'labels' table is empty.
> > > > I have no errors anyway.
> > > > Do these look like problems to you?
> > > >
> > > > Should I try to add the translation of pods, containers and
> > > > services?
> > > >
> > > > I've added the code for review.
> > > >
> > > > Ruben
> > > > - Original Message -
> > > > From: "Tim Hinrichs" 
> > > > To: "Ruben" 
> > > > Cc: openstack-dev@lists.openstack.org, "timothy l hinrichs" <
> > > > timothy.l.hinri...@gmail.com>
> > > > Sent: Friday, November 25, 2016 10:36:29 PM
> > > > Subject: Re: [Congress] Magnum_driver
> > > >
> > > > Hi Ruben,
> > > >
> > > > Glad you got that worked out.  Once in a while I end up deleting my
> > .tox
> > > > dir because it gets out of date.  I guess 

[openstack-dev] [nova][bugs] Nova Bugs Team Meeting this Tuesday at 1800 UTC

2016-12-12 Thread Augustina Ragwitz
The next Nova Bugs Team meeting will be Tuesday, December 13 at 1800UTC
in #openstack-meeting-4

http://www.timeanddate.com/worldclock/fixedtime.html?iso=20161213T18

Feel free to add to the meeting agenda: 
https://wiki.openstack.org/wiki/Meetings/Nova/BugsTeam

-- 
Augustina Ragwitz
Señora Software Engineer
---
Ask me about contributing to OpenStack Nova!
https://wiki.openstack.org/wiki/Nova/Mentoring

Waiting for your change to get through the gate? Clean up some Nova
bugs!
http://45.55.105.55:8082/bugs-dashboard.html
---
email: aragwitz+n...@pobox.com
irc: auggy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Network Configuration in TripleO UI

2016-12-12 Thread Dan Sneddon
On 12/08/2016 06:05 AM, Jiri Tomasek wrote:
> Hi all,
> 
> I've been investigating how to implement TripleO network configuration
> in TripleO UI. Based on my findings I'd like to propose a solution.
> 
> tl;dr proposal: Slightly refactor Network environment files to match
> GUI usage, Use Jinja Templating to generate dynamic parts of the
> templates/environments
> 
> 
> # Overview
> 
> I've used Ben Nemec's amazing Network template generator as a reference
> to help me understand how the network configuration works [1]. In
> general the process of configuring the network in TripleO is:
> 
> Define which Networks we intend to use -> Assign Roles to the Networks
> (+ Assign Role Services to the Network) -> Generate NIC config
> templates based on previous information
> 
> 
> # Deeper dive into templates
> 
> We currently have 2 environment files in THT [2] which define network
> configuration:
> 
> network-environment.yaml [3] - holds the information on NIC
> configuration for each Role using
> OS::TripleONet::SoftwareConfig resource + related
> parameter configuration
> 
> network-isolation.yaml [4]
> - defines the list of networks using
> OS::TripleO::Network:: resource
> - defines ports configuration for each network using
> OS::TripleO::Network::Ports::VipPort (note that both
> resources point to the static templates - those templates don't require
> any manual modification)
> - holds  Roles - Networks assignment using
> OS::TripleOPorts::Port for each role and
> storage (again, templates referenced by those resources don't require
> any modification)
> 
> User is intended to go ahead and modify those environments and provide
> NIC config templates to achieve a network configuration that matches
> his needs.
> 
> 
> # How GUI works
> 
> Before proceeding to proposed changes I need to describe briefly how
> TripleO UI works. TripleO UI is using THT as a source of truth, which
> means that it is trying not to add any additional business logic or
> manipulate templates. Rather it uses environment files as a 'features'
> which user can enable or disable depending on the needs of the
> deployment. The information about inter-environment relationships is
> tracked in capabilities-map.yaml which is also part of the THT. Based
> on these choices, UI allows user to configure parameters for those
> features. The parameter values and information about which environments
> are selected is stored in mistral environment. This approach leaves the
> plan templates intact. Huge benefit of this approach is that UI (or
> tripleo-common) does not need to hold explicit business logic related
> to certain deployment features as it is purely driven by THT. Also
> Adding a new feature involves only providing the templates/environments
> and it automatically appears as an option in UI.
> 
> To achieve best user experience while using this approach, the
> environment files need to be defined in a granular manner, so they
> don't require user to modify them and each describe an isolated 'feature'.
> 
> Roles and Network Configuration are exceptions to this concept as they
> require modification/generation of the templates/environments and
> therefore they use Jinja templating to achieve that.
> 
> 
> # The proposal
> 
> So having described previous, here is the approach I think we should
> use to achieve network configuration using TripleO UI:
> 
> 1. Put networks definitions into separate environment for each network:
> - this way GUI can provide a list of networks available to use and let
> user select which of them he wants to use. These environments are not
> dynamic and if user wants to add a new network, he does so by creating
> new templates and environment for it. UI also provides means to
> configure parameters for each network at this point (if needed).
> 
> For example the environment for a Storage Network looks like this:
> 
> resource_registry:
>   OS::TripleO::Network::Storage: ../network/storage.yaml
>   OS::TripleO::Network::Ports::StorageVipPort:
> ../network/ports/storage.yaml
> 
> 2. Assign Roles to Networks
> Having the Networks selected as well as Roles defined, TripleO UI
> provides user with means to assign Roles to Networks. This step
> involves generating the network-environment.yaml file. So TripleO UI
> sends the mapping of roles to network in json format to tripleo-common
> which in turn uses network-isolation.j2.yaml Jinja template to generate
> the environment file. I expect that pre-defined network-isolation.yaml
> will be included in default plan so the user does not need to start
> from scratch. Tripleo-common also provides an action to fetch
> network-roles assignment data by parsing the network-isolation.yaml
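The generation step described above can be sketched with a tiny Jinja template; the real network-isolation.j2.yaml in tripleo-heat-templates is more involved, and the template body and resource names below are a simplified, illustrative stand-in:

```python
# Simplified stand-in for the way tripleo-common would render a role->network
# assignment (sent by the UI as JSON-like data) into a Heat environment.
from jinja2 import Template

TEMPLATE = Template("""\
resource_registry:
{%- for role, networks in role_networks.items() %}
{%- for net in networks %}
  OS::TripleO::{{ role }}::Ports::{{ net }}Port: ../network/ports/{{ net | lower }}.yaml
{%- endfor %}
{%- endfor %}
""")

# Assignment as the UI might send it:
role_networks = {
    "Controller": ["Storage", "InternalApi"],
    "Compute": ["InternalApi"],
}

print(TEMPLATE.render(role_networks=role_networks))
```

Parsing the rendered environment back (the "fetch network-roles assignment" action) would invert this mapping, which keeps the plan templates themselves as the single source of truth.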
> 
> In addition, user is able to assign individual Role Services to a
> Network. ServiceNetMap parameter is currently used for this. GUI needs
> to make sure that it represents Services-Networks assignment grouped by
> Role so it is ensured that user assigns Services to only 

Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Sean McGinnis
On Mon, Dec 12, 2016 at 07:58:17AM +0100, Mehdi Abaakouk wrote:
> Hi,
> 
> I have recently seen that drbdmanage python library is no more GPL2 but
> need a end user license agreement [1].
> 
> Is this compatible with the driver policy of Cinder ?
> 
> [1] 
> http://git.drbd.org/drbdmanage.git/commitdiff/441dc6a96b0bc6a08d2469fa5a82d97fc08e8ec1
> 
> Regards
> 
> -- 
> Mehdi Abaakouk

It has been pointed out that this change of license no longer qualifies
this driver to have CI run by openstack-infra. It is a requirement that
all Cinder backend drivers have a running third party CI to validate all
patches.

A new CI will need to be set up for this driver ASAP. If that is not
possible, the driver will be marked as not supported in this release and
removed in Pike.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-12 Thread John Villalovos
On Mon, Dec 12, 2016 at 2:45 AM, Thierry Carrez 
wrote:

> There was a thread in the past discussing renaming -alt to -2 but it
> concluded that it was not worth the hassle (losing the 489 people
> hanging there).
>

I was thinking maybe #openstack-meeting-alt could be dedicated in the
future to "ALTernating" meetings. Just a random thought :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Deep dive session about the UI - January 12

2016-12-12 Thread Ana Krivokapic
Hi Everyone,

On the 12th of January 2017, I'll lead a TripleO deep dive[1] session on
how to contribute to the TripleO UI. Hope to see many of you there!


[1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics

-- 
Regards,
Ana Krivokapic
Senior Software Engineer
OpenStack team
Red Hat Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2016-12-12 Thread Loo, Ruby
Hi,

We are dazzled to present this week's priorities and subteam report for Ironic. 
As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. attach/detach: review code: https://review.openstack.org/#/c/327046/
2. portgroup: review code https://review.openstack.org/335964, then 
https://review.openstack.org/398417
3. driver composition: the next patch introduces hardware types: 
https://review.openstack.org/336626
4. next notifications: review code for CRUD notifications: 
https://review.openstack.org/#/c/356541/
5. boot from volume: next up: 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691

Bugs (dtantsur)
===
- Stats (diff between 05 Dec 2016 and 12 Dec 2016)
- Ironic: 217 bugs (-2) + 231 wishlist items (+4). 7 new (-9), 183 in progress 
(+12), 3 critical (+3), 29 high (-1) and 29 incomplete (+1)
- Inspector: 11 bugs (-3) + 22 wishlist items. 0 new (-3), 12 in progress (+1), 
0 critical, 1 high and 4 incomplete (+1)
- Nova bugs with Ironic tag: 10. 0 new, 0 critical, 0 high (-1)
- Critical bugs are related to ironicclient CI issues
- Thanks a lot JayF for your bug triaging effort

Portgroups support (sambetts, vdrok)

* trello: https://trello.com/c/KvVjeK5j/29-portgroups-support
- status as of most recent weekly meeting:
- portgroups patches need reviews: 
https://review.openstack.org/#/q/topic:bug/1618754
- including the client!

CI refactoring (dtantsur, lucasagomes)
==
* trello: https://trello.com/c/c96zb3dm/32-ci-refactoring
- status as of most recent weekly meeting:
- (lucasagomes) No updates

Rolling upgrades and grenade-partial (rloo, jlvillal)
=
* trello: 
https://trello.com/c/GAlhSzLm/2-rolling-upgrades-and-grenade-with-multi-node
- status as of most recent weekly meeting:
- spec was merged last week: https://review.openstack.org/299245
- RFE/patch for online-db-migration has some reviews. Need more details: 
https://bugs.launchpad.net/ironic/+bug/1585141
- Testing work:
- Tempest "smoke" tests are now passing for multi-node/multi-tenant 
with various patches!
- Grenade patch has one +2: https://review.openstack.org/407502
- Sort of blocked by tempest becoming a plugin.
- Work is ongoing for enabling Grenade with multi-tenant: 
https://review.openstack.org/389268
- Work on-going to get tempest "smoke" test working for the 
multi-node/multi-tenant job(vsaienko)

Security groups (jroll)
===
* trello: 
https://trello.com/c/klty7VVo/30-security-groups-for-provisioning-cleaning-network
- status as of most recent weekly meeting:
- last patch, documentation, merged last week: 
https://review.openstack.org/#/c/393962/

Interface attach/detach API (sambetts)
==
* trello: https://trello.com/c/nryU4w58/39-interface-attach-detach-api
- status as of most recent weekly meeting:
- Spec merged and Nova BP approved
- Ironic patch up for review: 
https://review.openstack.org/#/q/topic:bug/1582188
- Patches need updating still:
- Nova - https://review.openstack.org/364413
- IronicClient - https://review.openstack.org/364420

Generic boot-from-volume (TheJulia)
===
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- status as of most recent weekly meeting:
- API side changes for volume connector information have a procedural -2 
until we can begin making use of the data in the conductor, but should still be 
reviewed
- https://review.openstack.org/#/c/214586/
- Boot from volume/storage interface patches will be rebased this Monday.
- 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691

Driver composition (dtantsur)
=
* trello: https://trello.com/c/fTya14y6/14-driver-composition
- gerrit topic: https://review.openstack.org/#/q/status:open+topic:bug/1524745
- status as of most recent weekly meeting:
- still on review: the patch introducing hardware types defining and 
loading: https://review.openstack.org/336626
- new on review: generic hardware types: https://review.openstack.org/400678
- Maybe 4-6 more patches to go before work is completed.

Rescue mode (JayF)
==
* trello: https://trello.com/c/PwH1pexJ/23-rescue-mode
- status as of most recent weekly meeting:
- patch for API/Conductor methods needs review: 
https://review.openstack.org/#/c/350831/

etags in the REST API (gzholtkevych)

* trello: https://trello.com/c/MbNA4geB/33-rest-api-etags
- status as of most recent weekly meeting:
- (gzholtkevych) spec needs 

Re: [openstack-dev] [kolla][tc] Video Meetings - input requested

2016-12-12 Thread Matthew Treinish
On Tue, Dec 13, 2016 at 01:16:13AM +0800, Jeffrey Zhang wrote:
> TC,
> Some contributors in kolla have had unscheduled video meetings. This has
> resulted in complaints about inclusiveness. Some contributors can’t even
> make the meeting we have, and another scheduled video meeting might produce
> a situation in which there is no record of decisions made during the video
> meeting. At least with IRC meetings there is always a log.
> 
> One solution is to schedule these meetings and have two 1 hour meetings per
> week.
> 
> As the PTL while Michal is moving, I have trouble following these video
> meetings since English isn’t my native language. Can you offer any advice
> for our project?
> 

Well, one of the 4 Opens, open community, specifically calls out having official
meetings over IRC. [1] It's also a requirement for OpenStack projects to have
meetings on IRC where they're logged. [2] If these video meetings are being used
to make decisions and there is no discussion of them on the ML or via an
official IRC meeting, then that's a problem (for the reasons you've outlined).

This basic topic was also discussed before in the thread starting here:

http://lists.openstack.org/pipermail/openstack-dev/2015-February/056551.html

As Flavio said there I don't think we can (or should?) prevent people from
having ad-hoc calls or video chats to work through an issue. They can be quite
valuable to work through a disagreement or other problem with high bandwidth
communication. But, that by itself should never be definitive discussion or
used in lieu of an open communication mechanism to make decisions in the
community. Whatever is discussed in these has to go through the normal
open communication mechanisms we use in the community before you can act upon
them.

I'm not really familiar with the full scope of these video meetings Kolla is
having (this is the first I've heard of them) but based on your description it
sounds like they are coming close to violating the open community requirement
for projects. I think this is especially true if you're using the video
meetings as a replacement for irc meetings. But, without knowing all the
details I can't say for certain.


-Matt Treinish

[1] https://governance.openstack.org/tc/reference/opens.html#open-community
[2] https://governance.openstack.org/tc/reference/new-projects-requirements.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Dec 14th kuryr-kubernetes syncup

2016-12-12 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

December 14th at 11:00 UTC we'll be having a video meeting [1] to sync
about the current Kubernetes integration. There is an etherpad for the
topics that we'll be covering [2]. Feel free to add topics and +1 the
topics that you want to have discussion on.

The objective of the meeting is to come to decision about short term design
and implementation. If there is time, we'll also kick off design talk on
some longer term items.

Regards,

Toni


[1]
https://plus.google.com/hangouts/_/calendar/aXJlbmFiLmRldkBnbWFpbC5jb20.2kstghq4tavlmnhkfpnrjqd3j4
[2] https://etherpad.openstack.org/p/kuryr-kubernetes-dec14-syncup


[openstack-dev] [kolla][tc] Video Meetings - input requested

2016-12-12 Thread Jeffrey Zhang
TC,
Some contributors in kolla have had unscheduled video meetings. This has
resulted in complaints about inclusiveness. Some contributors can’t even
make the meeting we have, and another scheduled video meeting might produce
a situation in which there is no record of decisions made during the video
meeting. At least with IRC meetings there is always a log.

One solution is to schedule these meetings and have two 1 hour meetings per
week.

As the PTL while Michal is moving, I have trouble following these video
meetings since English isn’t my native language. Can you offer any advice
for our project?

 --
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


[openstack-dev] [TripleO] Re-defining network templates/isolation

2016-12-12 Thread Tim Rozet
Hello,
I wanted to get thoughts about re-thinking how users configure and create new 
networks with OOO.  The current way to configure network settings for a 
deployment requires creating nic + network environment templates, and updating 
the network isolation resource registry.  I think a better approach could 
consolidate all of the network settings for a deployment into a single yaml 
file, and then parse that information to create the appropriate nic and network 
env templates.  We do that in OPNFV Apex with a combination of python and 
jinja2 using this unified template format:

https://github.com/opnfv/apex/blob/master/config/network/network_settings.yaml

Furthermore, consider defining new networks in OOO.  Think about how much is 
involved in creating a new network: a network, subnet, and port definition, plus a 
net_ip_map entry and VIP for that network. If you look at the tht/network directory, almost all of the 
templates for ports and networks have the exact same format.  I think you could 
make the example above dynamic so that a user could define any new network 
there and the corresponding port, network + subnet template files could be 
created on the fly.

I think this creates a much simpler interface for users by exposing the 
networking configuration they need, while hiding redundant OOO/Heat template 
syntax they don't necessarily care about.  Thoughts?
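To make the idea concrete, here is a minimal, hypothetical sketch of what "creating port templates on the fly" could look like. The settings layout and field names below are illustrative only, not the actual OPNFV Apex schema or tht structure:

```python
# Hypothetical unified network settings; the real schema lives in
# OPNFV Apex's network_settings.yaml and will differ.
networks = {
    'storage': {'cidr': '172.16.1.0/24', 'vlan': 30},
    'tenant': {'cidr': '172.16.0.0/24', 'vlan': 50},
}


def make_port_template(name, net):
    """Build a Heat template skeleton for one network's port resource.

    Every generated template has the same shape, which is exactly why
    the hand-written tht/network templates look so repetitive.
    """
    res_name = '%sPort' % name.capitalize()
    return {
        'heat_template_version': '2015-04-30',
        'description': 'Port for the %s network (%s)' % (name, net['cidr']),
        'resources': {
            res_name: {
                'type': 'OS::Neutron::Port',
                'properties': {
                    'network': {'get_param': '%sNetName' % name.capitalize()},
                },
            },
        },
    }


# One template per declared network, generated rather than hand-written.
templates = {name: make_port_template(name, net)
             for name, net in networks.items()}
print(sorted(templates))  # -> ['storage', 'tenant']
```

With something like this, adding a new network becomes a one-entry change to the settings file instead of a new set of near-identical template files.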

Tim Rozet
Red Hat SDN Team




Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Jeremy Stanley
On 2016-12-12 16:44:53 + (+), Duncan Thomas wrote:
[...]
> Having read the Openstack rules linked to earlier in the thread
> ( https://governance.openstack.org/tc/reference/licensing.html )
> we're clearly violating that.
[...]

Keep in mind that those guidelines were drafted in collaboration
with legal counsel for the foundation, and so do not merely
represent community ideals we strive to meet but actual legal
obligations to avoid license incompatibilities where driver
interfaces can imply derivative works. There's been some
cross-project review of driver commonalities/differences underway to
inform a more structured legal discussion. It's not complete yet as
far as I know, but expect the broader discussion on it to resume
fairly soon.
-- 
Jeremy Stanley



Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Ash
Agreed. Just saying that if the software is important to the community, but
the distribution/licensing terms are not, there's always a solution. That's
all I was trying to get at. If, however, resources don't avail themselves,
that can also be indicative that the need vs issue isn't overwhelming.

Increasingly, I am asking these questions of people who are willing to
replace CAPEX solutions with high internal OPEX. The answer continues to be
that they've made their decision to alleviate lock-in. But there's always a
trade-off, as you've just highlighted.

On Mon, Dec 12, 2016 at 8:47 AM, Duncan Thomas 
wrote:

> On 12 December 2016 at 16:35, Ash  wrote:
>
>> I tend to agree with you, Sean. Also, if there's a concern that some
>> project has changed its license, then just create a fork. In the case of
>> this previously GPL code, it will at least be re-distributable. In the end,
>> I just don't think this is a huge issue that cannot be easily managed.
>>
>
> Creating a fork is easy. Maintaining a fork against bitrot, and managing
> the drift between the 'official' version and the fork, is a task that
> requires resources that are hard to find.
>
> We've put up patches to remove (at least) two drivers for exactly this
> sort of switch before, and I think it was the right thing to do then and
> now.
>
>
>


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Duncan Thomas
On 12 December 2016 at 16:35, Ash  wrote:

> I tend to agree with you, Sean. Also, if there's a concern that some
> project has changed its license, then just create a fork. In the case of
> this previously GPL code, it will at least be re-distributable. In the end,
> I just don't think this is a huge issue that cannot be easily managed.
>

Creating a fork is easy. Maintaining a fork against bitrot, and managing
the drift between the 'official' version and the fork, is a task that
requires resources that are hard to find.

We've put up patches to remove (at least) two drivers for exactly this
sort of switch before, and I think it was the right thing to do then and
now.


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Duncan Thomas
On 12 December 2016 at 16:14, Sean McGinnis  wrote:

>
> Honestly, my opinion is it's just fine as it is, and the fact that this
> license has changed doesn't make any difference.
>
> For most external storage there is _something_ that the deployer needs
> to do outside of install and configure OpenStack to get things set up
> and working. Whether that is setting up a physical array or downloading
> and installing a client library on their own - that's just part of the
> requirements for whatever solution they chose to deploy.
>
> It would be great if things were all open and an all in one
> download->install->run solution, but that's not reality and not what
> everyone is looking for out of OpenStack. So be it.
>
>

I'm going to respectfully but forcefully disagree here, and even go so far
as to suggest that the failing of Openstack is that people /do/ want that,
and Openstack is, in many areas (not just cinder) simply unable to provide
such a solution.

I'm willing to bet you can't find a customer who says "yes, we want to mess
around with downloading things from different sources, worrying about
versions, keeping copies of things in case companies decide to take their
portal down... oh and figuring out how to get those onto my nodes is great
fun, we'll have a double helping of that please." That is, frankly,
nonsense. Sure some people might put up with it, but I don't think anybody
wants it.

Having read the Openstack rules linked to earlier in the thread (
https://governance.openstack.org/tc/reference/licensing.html) we're clearly
violating that.

Having worked to try to build a turnkey Openstack distro, I can say with
authority that the cinder soft dependencies are absolutely an obstacle, and
in some cases (like customers who want a fully offline/airgapped install)
an insurmountable one.

-- 
Duncan Thomas


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Ash
I tend to agree with you, Sean. Also, if there's a concern that some
project has changed its license, then just create a fork. In the case of
this previously GPL code, it will at least be re-distributable. In the end,
I just don't think this is a huge issue that cannot be easily managed.

On Mon, Dec 12, 2016 at 8:14 AM, Sean McGinnis 
wrote:

> On Mon, Dec 12, 2016 at 03:07:23PM +, Duncan Thomas wrote:
> > On 12 December 2016 at 14:55, Andreas Jaeger  wrote:
> >
> > >
> > > So, what are the steps forward here? Requiring a non-free library like
> > > drbdmanage is not acceptable AFAIU,
> > >
> >
> > This is pretty much where things went dead at the summit - there were
> > various degrees of unacceptability (I was personally bothered by the
> > parts that can't be freely redistributed, rather than free software per
> > se), but that still leaves a large number of problem cases. Few people were
> > willing to seriously consider pulling 1/3 of the cinder drivers out, and
> > there was not AFAICT a firm conclusion.
>
> Honestly, my opinion is it's just fine as it is, and the fact that this
> license has changed doesn't make any difference.
>
> For most external storage there is _something_ that the deployer needs
> to do outside of install and configure OpenStack to get things set up
> and working. Whether that is setting up a physical array or downloading
> and installing a client library on their own - that's just part of the
> requirements for whatever solution they chose to deploy.
>
> It would be great if things were all open and an all in one
> download->install->run solution, but that's not reality and not what
> everyone is looking for out of OpenStack. So be it.
>
> Sean
>
>


[openstack-dev] OpenStack Dev Digest Dec 3-9

2016-12-12 Thread Kendall Nelson
Hello Everyone :)

HTML Version: http://wp.me/p1eUil-251

Updates:

   -

   Nova placement/resource providers update with some discussions on
   aggregates and API [4]
   -

   New Nova core reviewer: Stephen Finucane [8]
   -

   Project mascots are all around the mailing list, search for “logo” in
   the subject to find them
   -

   Status update on unsupported Ironic drivers [10]
   -

   The DefCore Committee is now called Interop Working Group [11]


Creating a New IRC Meeting Room [9]

   -

   Create a new channel: #openstack-meeting-5
   -

   Generally recommend project teams to use the meeting channels on Freenode
   -

   Let projects use their channels for the meetings, but only if the
   channel is logged
   -

   As a next step limit the official meeting rooms for official projects
   and have non-official projects using their own IRC channels


Neutron Trunk port feature

   -

   Clarifying some usability aspects [1]
   -

   Performance measurements [2]


Ocata Bugsmash Day [3]

   -

   Thanks to Huawei and Intel and all the attendees to make it happen
   -

   Let’s keep the tradition and grow the event further if we can


PTG Travel Support Program [5][6]

   -

   Deadline of the first phase is this week
   -

   Phase two deadline is January 15th
   -

   Also reminding you to register to the event if you can come, but haven’t
   done it yet [7]


Finish test job transition to Ubuntu Xenial [12]

   -

   Merged at last! [13]
   -

   A lot of experimental and non votings jobs had to be updated
   -

   Changes to Master no longer run on trusty
   -

   Might have missed things still, so keep a look out


[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-December/108530.html

[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-December/108460.html

[3]
http://lists.openstack.org/pipermail/openstack-dev/2016-December/108538.html

[4]
http://lists.openstack.org/pipermail/openstack-dev/2016-December/108395.html

[5]
http://lists.openstack.org/pipermail/openstack-dev/2016-December/108645.html

[6] https://openstackfoundation.formstack.com/forms/travelsupportptg_atlanta

[7] https://pikeptg.eventbrite.com/

[8]
http://lists.openstack.org/pipermail/openstack-dev/2016-December/108520.html

[9]
http://lists.openstack.org/pipermail/openstack-dev/2016-December/108360.html

[10]
http://lists.openstack.org/pipermail/openstack-dev/2016-December/108624.html

[11] http://lists.openstack.org/pipermail/openstack-dev/2016-December/108673.html

[12]
http://lists.openstack.org/pipermail/openstack-dev/2016-November/106906.html

[13] https://review.openstack.org/#/c/348078

Enjoy!

-Kendall Nelson (diablo_rojo)


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Sean McGinnis
On Mon, Dec 12, 2016 at 03:07:23PM +, Duncan Thomas wrote:
> On 12 December 2016 at 14:55, Andreas Jaeger  wrote:
> 
> >
> > So, what are the steps forward here? Requiring a non-free library like
> > drbdmanage is not acceptable AFAIU,
> >
> 
> This is pretty much where things went dead at the summit - there were
> various degrees of unacceptability (I was personally bothered by the
> parts that can't be freely redistributed, rather than free software per se),
> but that still leaves a large number of problem cases. Few people were
> willing to seriously consider pulling 1/3 of the cinder drivers out, and
> there was not AFAICT a firm conclusion.

Honestly, my opinion is it's just fine as it is, and the fact that this
license has changed doesn't make any difference.

For most external storage there is _something_ that the deployer needs
to do outside of install and configure OpenStack to get things set up
and working. Whether that is setting up a physical array or downloading
and installing a client library on their own - that's just part of the
requirements for whatever solution they chose to deploy.

It would be great if things were all open and an all in one
download->install->run solution, but that's not reality and not what
everyone is looking for out of OpenStack. So be it.

Sean



Re: [openstack-dev] [Release-job-failures] Release of openstack/glance failed

2016-12-12 Thread Ian Cordasco
 

-Original Message-
From: Andreas Jaeger 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: December 12, 2016 at 01:39:17
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [Release-job-failures] Release of 
openstack/glance failed

> On 2016-12-12 08:34, Andreas Jaeger wrote:
> > On 2016-12-12 06:20, Tony Breeds wrote:
> >> On Mon, Dec 12, 2016 at 04:44:18AM +, jenk...@openstack.org wrote:
> >>> Build failed.
> >>>
> >>> - glance-docs-ubuntu-xenial 
> >>> http://logs.openstack.org/38/38f199507aff8bfcaf81ad9ea58ea326224faf5f/release/glance-docs-ubuntu-xenial/de7d73e/
> >>>   
> : FAILURE in 1m 44s
> >>
> >> This boils down to [1] which is a known problem with newer cryptography 
> >> (and
> >> the interaction with openssl). What I don't understand is how we got there
> >> with constraints working[2]. Perhaps the openssl on the release signing
> >> node is "newer" than on general nodepool nodes?
> >
> > glance does not use constraints in the venv environment.
> >
> > It can be used since a few months. I'll send a change for master,
>  
> I expect this needs backporting to stable branches - stable or glance
> team, please review and backport yourself:
>  
> https://review.openstack.org/409642


Thank you Andreas!

--  
Ian Cordasco
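For readers unfamiliar with the fix being discussed: "using constraints in the venv environment" generally means pinning pip installs to the global upper-constraints file in the project's tox.ini. A rough sketch of such a stanza follows; the exact URL and environment variable vary by project and branch, so treat this as illustrative rather than the actual glance change:

```ini
[testenv:venv]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
commands = {posargs}
```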




[openstack-dev] [craton] Meetings for 2016 Dec 19, 2016 Dec 26, and 2017 Jan 2 cancelled

2016-12-12 Thread Ian Cordasco
Hi Cratoneers! (Cratonistas? Cratoners? Cratons? I can never remember
what name we chose for ourselves...)

Most of the team will be taking holiday for some (if not all) of the
next two weeks, so we've decided to cancel the meetings scheduled for
then. Further, most businesses seem to be observing New Years on 2017
Jan 2 (meaning people will still be on vacation).

The next meeting will be 2017 Jan 9.

Happy New Year and Happy Holidays!
-- 
Ian Cordasco
Craton Core Reviewer



Re: [openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-12 Thread Pradeep Kilambi
On Mon, Dec 12, 2016 at 8:58 AM, Giulio Fidente  wrote:

> On 12/12/2016 02:51 PM, Giulio Fidente wrote:
>
>> On 12/09/2016 04:49 PM, Pradeep Kilambi wrote:
>>
>>> Hey Folks:
>>>
>>> I would like to get some thoughts on $Subject. This came up when i was
>>> discussing the standalone roles for telemetry. Currently when we deploy
>>> redis in tripleo, its a pacemaker managed service. So if we were to
>>> deploy telemetry services on a dedicated node we could. But redis will
>>> have to be on a another node? (assuming we dont want to pull in
>>> pacemaker on to telemetry nodes).
>>>
>>
>> currently redis instances are not configured as a redis cluster but use
>> the master/slave replication model instead and pacemaker is taking care
>> of electing/relocating the redis master as needed
>>
>> there shouldn't be any dependency on the redis profile for the telemetry
>> roles, they should instead just point at the redis_vip
>>
>> the redis_vip is always guaranteed (by haproxy) to point to the redis
>> master
>>
>> With most services moved out of pacemaker in Newton, I think its time to
>>> move redis as well? Are there any constraints in moving redis to be
>>> managed by systemd? Looking at how we do it, It should be easily movable
>>> to systemd? Can we consider doing this for Ocata?
>>>
>>
>> I think we could look at using the redis cluster which allows multiple
>> masters, but I am not sure this can happen in Ocata ... yet again, there
>> shouldn't be in the telemetry roles any dependency on redis itself
>>
>> if we were to use the cluster mode the only difference would probably be
>> that the redis_vip will start balancing requests across the nodes
>>
>
> also note that the cluster model is not replicating data across the nodes,
> but rather sharding the data, with the promise of surviving failures of
> some of the nodes and I'm not sure we want that anyway
>
> https://redis.io/topics/cluster-spec


I just thought we could use Redis Sentinel to achieve the HA and avoid the
pacemaker headache. I think it was already integrated into TripleO a while
ago; it's just not used, as pacemaker is taking over control.

See https://redis.io/topics/sentinel
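For reference, Sentinel-based HA is configured per master outside of Pacemaker. A minimal sentinel.conf sketch is below; the master name, address, quorum, and timeouts are illustrative, not TripleO's actual values:

```
# Monitor one master under the name "mymaster"; failover requires
# agreement from at least 2 sentinels (the quorum).
sentinel monitor mymaster 192.0.2.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
```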



>
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
>


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Duncan Thomas
On 12 December 2016 at 14:55, Andreas Jaeger  wrote:

>
> So, what are the steps forward here? Requiring a non-free library like
> drbdmanage is not acceptable AFAIU,
>

This is pretty much where things went dead at the summit - there were
various degrees of unacceptability (I was personally bothered by the
parts that can't be freely redistributed, rather than free software per se),
but that still leaves a large number of problem cases. Few people were
willing to seriously consider pulling 1/3 of the cinder drivers out, and
there was not AFAICT a firm conclusion.

-- 
Duncan Thomas


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Andreas Jaeger
On 2016-12-12 15:46, Sean McGinnis wrote:
> On Mon, Dec 12, 2016 at 11:00:41AM +, Duncan Thomas wrote:
>> It's a soft dependency, like most of the vendor specific dependencies - you
>> only need them if you're using a specific backend. We've loads of them in
>> cinder, under a whole bunch of licenses. There was a summit session
>> discussing it that didn't come to any firm conclusions.
> 
> 
> Here is the quick survey of libs I took leading up to the Summit:

Thanks.

Note this is now outdated - as Mehdi pointed out.

> https://etherpad.openstack.org/p/cinder-brick-driver-externals
> 
> There are quite a few soft requirements under various licenses.

So, what are the steps forward here? Requiring a non-free library like
drbdmanage is not acceptable AFAIU,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[openstack-dev] [mistral] No team meeting today - Dec 12th

2016-12-12 Thread Dougal Matthews
Hi all,

Todays meeting is cancelled as none of the Mistral team are available to
attend. If you have anything you would have liked to discuss please raise
it in a reply to this email.

As for me, I just have a couple of small patches that need second reviews:

https://review.openstack.org/408617
https://review.openstack.org/408515
https://review.openstack.org/407454

Thanks,
Dougal


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Sean McGinnis
On Mon, Dec 12, 2016 at 08:46:55AM -0600, Sean McGinnis wrote:
> On Mon, Dec 12, 2016 at 11:00:41AM +, Duncan Thomas wrote:
> > It's a soft dependency, like most of the vendor specific dependencies - you
> > only need them if you're using a specific backend. We've loads of them in
> > cinder, under a whole bunch of licenses. There was a summit session
> > discussing it that didn't come to any firm conclusions.
> 
> 
> Here is the quick survey of libs I took leading up to the Summit:
> 
> https://etherpad.openstack.org/p/cinder-brick-driver-externals
> 
> There are quite a few soft requirements under various licenses.
> 
> Sean
> 

Oh, I should note the bottom part is the driver libs I reviewed. The top
part has some additional os-brick information that Walt Boring compiled.





Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Mehdi Abaakouk

On Mon, Dec 12, 2016 at 11:00:41AM +, Duncan Thomas wrote:

It's a soft dependency, like most of the vendor specific dependencies - you
only need them if you're using a specific backend. We've loads of them in
cinder, under a whole bunch of licenses. There was a summit session
discussing it that didn't come to any firm conclusions.


I have taken a look at some other soft dependencies (I may have missed some):

pywbem: LGPLv2+
vmemclient: APACHE 2.0
hpe3parclient: APACHE 2.0
purestorage: BSD 2-Clause
rbd/rados:  LGPL 2.1

They all carry at least OSI-approved licenses.

Anyway, I'm just sad to see that two open-source projects (Cinder and
DRBD) now need a library without an OSI-approved license to talk to each other.

Regards,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Requesting files from the overcloud from the undercloud

2016-12-12 Thread Steven Hardy
On Wed, Nov 30, 2016 at 01:54:34PM -0700, Alex Schultz wrote:
> Hey folks,
> 
> So I'm in the process of evaluating options for implementing the
> capture-environment-status-and-logs[0] blueprint.  At the moment my
> current plan is to implement a mistral workflow to execute the
> sosreport to bundle the status and logs up on the requested nodes.
> I'm leveraging a similar concept to the remote execution[1] method
> we currently expose via 'openstack overcloud execute'.  The issue I'm
> currently running into is getting the files off the overcloud node(s)
> so that they can be returned to the tripleoclient.  The files can be
> large so I don't think they are something that can just be returned as
> output from Heat.  So I wanted to ask for some input on the best path
> forward.
> 
> IDEA 1: Write something (script or utility) to be executed via Heat on
> the nodes to push the result files to a container on the undercloud.
> Pros:
> - The swift container can be used by the mistral workflow for other
> actions as part of this bundling
> - The tripleoclient will be able to just pull the result files
> straight from swift
> - No additional user access needs to be created to perform operations
> against the overcloud from the undercloud
> Cons:
> - Swift credentials (or token) need to be passed to the script being
> executed by Heat on the overcloud nodes which could lead to undercloud
> credentials being leaked to the overcloud

I think we can just use a swift tempurl?  That's in alignment with what we
already do for polling metadata from heat (the data is put into swift,
then we give a tempurl to the nodes; see /etc/os-collect-config.conf on the
overcloud nodes).

It's also well aligned with what we do for the DeployArtifactURLs
interface.

I guess the main difference here is we're only allowing GET access for
those cases, but here there's probably more scope for abuse, e.g POSTing
giant files from the overcloud nodes could impact e.g disk space on the
undercloud?

> - I'm not sure if all overcloud nodes would have access to the
> undercloud swift endpoint

I think they will, or the tempurl transport we use for heat won't work.
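For concreteness: a tempurl is just a presigned URL, so the overcloud nodes need no keystone credentials at all. Below is a minimal stdlib-only sketch of how a PUT-capable tempurl is signed on the undercloud side; the account/container/object path and key are invented for illustration, and in a real workflow python-swiftclient's `swiftclient.utils.generate_temp_url` helper would do this for you.

```python
import hmac
import time
from hashlib import sha1

def make_temp_url(path, key, method="PUT", ttl=3600):
    # Swift's TempURL middleware validates an HMAC-SHA1 signature over
    # the string "METHOD\nexpires\npath", keyed with the account's
    # temp-url key, so no keystone credentials ever leave the undercloud.
    expires = int(time.time()) + ttl
    body = "%s\n%s\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%s" % (path, sig, expires)

# Hypothetical object path for one node's sosreport tarball:
url = make_temp_url("/v1/AUTH_undercloud/support-logs/controller-0.tar.xz",
                    key="secret-temp-url-key")
```

The resulting URL can be handed to the node, which then uploads with a plain HTTP PUT; the signature expires after `ttl` seconds, which limits the abuse window Steven mentions.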

> IDEA 2: Write additional features into undercloud deployment for ssh
> key generation and inclusion into the deployment specifically for this
> functionality to be able to reach into the nodes and pull files out
> (via ssh).
> Pros:
> - We would be able to leverage these 'support' credentials for future
> support features (day 2 operations?)
> - ansible (or similar tooling) could be used to perform operations
> against the overcloud from the undercloud nodes
> Cons:
> - Complexity and issues around additional user access
> - Depending on where the ssh file transfer occurs (client vs mistral),
> additional network access might be needed.
> 
> IDEA 2a: Leverage the validations ssh key to pull files off of the
> overcloud nodes
> Pros:
> - ssh keys already exist when enable_validations = true so we can
> leverage existing
> Cons:
> - Validations can be disabled, possibly preventing 'support' features
> from working
> - Probably should not leverage the same key for multiple functions.
> 
> I'm leaning towards idea 1, but wanted to see if there was some other
> form of existing functionality I'm not aware of.

Yeah I think (1) is probably the way to go, although cases could be argued
for all approaches you mention.

My main reason for preferring (1) is I think we'll want the data to end up
in swift anyway, e.g. so UI users can access it (which won't be possible if
we e.g. scp some tarball from overcloud nodes into the undercloud filesystem
directly), so we may as well just push it into swift from the nodes?

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-12 Thread Giulio Fidente

On 12/12/2016 02:51 PM, Giulio Fidente wrote:

On 12/09/2016 04:49 PM, Pradeep Kilambi wrote:

Hey Folks:

I would like to get some thoughts on $Subject. This came up when I was
discussing the standalone roles for telemetry. Currently when we deploy
redis in tripleo, it's a pacemaker-managed service. So if we wanted to
deploy telemetry services on a dedicated node, we could. But redis would
have to be on another node? (assuming we don't want to pull pacemaker
onto telemetry nodes).


currently redis instances are not configured as a redis cluster but use
the master/slave replication model instead and pacemaker is taking care
of electing/relocating the redis master as needed

there shouldn't be any dependency on the redis profile for the telemetry
roles, they should instead just point at the redis_vip

the redis_vip is always guaranteed (by haproxy) to point to the redis
master


With most services moved out of pacemaker in Newton, I think it's time to
move redis as well? Are there any constraints in moving redis to be
managed by systemd? Looking at how we do it, it should be easily movable
to systemd? Can we consider doing this for Ocata?


I think we could look at using the redis cluster which allows multiple
masters, but I am not sure this can happen in Ocata ... yet again, there
shouldn't be in the telemetry roles any dependency on redis itself

if we were to use the cluster mode the only difference would probably be
that the redis_vip will start balancing requests across the nodes


also note that the cluster model does not replicate data across the
nodes, but rather shards the data, with the promise of surviving
failures of some of the nodes; I'm not sure we want that anyway


https://redis.io/topics/cluster-spec
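To make the sharding contrast concrete: per the cluster spec, Redis Cluster maps every key onto one of 16384 hash slots via CRC16, and each master owns a range of slots, so a failed master without a replica takes its shard of the data with it. A small sketch of the slot computation:

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16/XMODEM (poly 0x1021, init 0x0000), the checksum Redis
    # Cluster uses to hash keys onto slots.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    # Every key maps deterministically to one of 16384 hash slots, and
    # each cluster master serves a subset of the slots (sharding),
    # unlike master/slave replication where every node holds all data.
    return crc16_xmodem(key) % 16384
```

This is why switching to cluster mode would change the redis_vip semantics: requests would be spread across masters owning different slot ranges rather than all landing on a single elected master.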
--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [tripleo] Requesting files from the overcloud from the undercloud

2016-12-12 Thread Emilien Macchi
On Wed, Nov 30, 2016 at 3:54 PM, Alex Schultz  wrote:
> Hey folks,
>
> So I'm in the process of evaluating options for implementing the
> capture-environment-status-and-logs[0] blueprint.  At the moment my
> current plan is to implement a mistral workflow to execute the
> sosreport to bundle the status and logs up on the requested nodes.
> I'm leveraging a similar concept to the remote execution[1] method
> we currently expose via 'openstack overcloud execute'.  The issue I'm
> currently running into is getting the files off the overcloud node(s)
> so that they can be returned to the tripleoclient.  The files can be
> large so I don't think they are something that can just be returned as
> output from Heat.  So I wanted to ask for some input on the best path
> forward.
>
> IDEA 1: Write something (script or utility) to be executed via Heat on
> the nodes to push the result files to a container on the undercloud.
> Pros:
> - The swift container can be used by the mistral workflow for other
> actions as part of this bundling
> - The tripleoclient will be able to just pull the result files
> straight from swift
> - No additional user access needs to be created to perform operations
> against the overcloud from the undercloud
> Cons:
> - Swift credentials (or token) need to be passed to the script being
> executed by Heat on the overcloud nodes which could lead to undercloud
> credentials being leaked to the overcloud
> - I'm not sure if all overcloud nodes would have access to the
> undercloud swift endpoint

I'm in favor of prototyping idea 1 and seeing how we can resolve the
issue with credentials. We could eventually create a special, dedicated
account for these containers?
I think this is the simplest solution for now; let's see how it could work.

> IDEA 2: Write additional features into undercloud deployment for ssh
> key generation and inclusion into the deployment specifically for this
> functionality to be able to reach into the nodes and pull files out
> (via ssh).
> Pros:
> - We would be able to leverage these 'support' credentials for future
> support features (day 2 operations?)
> - ansible (or similar tooling) could be used to perform operations
> against the overcloud from the undercloud nodes
> Cons:
> - Complexity and issues around additional user access
> - Depending on where the ssh file transfer occurs (client vs mistral),
> additional network access might be needed.
>
> IDEA 2a: Leverage the validations ssh key to pull files off of the
> overcloud nodes
> Pros:
> - ssh keys already exist when enable_validations = true so we can
> leverage existing
> Cons:
> - Validations can be disabled, possibly preventing 'support' features
> from working
> - Probably should not leverage the same key for multiple functions.
>
> I'm leaning towards idea 1, but wanted to see if there was some other
> form of existing functionality I'm not aware of.
>
> Thanks,
> -Alex
>
> [0] 
> https://blueprints.launchpad.net/tripleo/+spec/capture-environment-status-and-logs
> [1] https://blueprints.launchpad.net/tripleo/+spec/remote-execution
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-12 Thread Emilien Macchi
On Fri, Dec 9, 2016 at 10:49 AM, Pradeep Kilambi  wrote:
> Hey Folks:
>
> I would like to get some thoughts on $Subject. This came up when I was
> discussing the standalone roles for telemetry. Currently when we deploy
> redis in tripleo, it's a pacemaker-managed service. So if we wanted to deploy
> telemetry services on a dedicated node, we could. But redis would have to be
> on another node? (assuming we don't want to pull pacemaker onto
> telemetry nodes).
>
> With most services moved out of pacemaker in Newton, I think it's time to
> move redis as well? Are there any constraints in moving redis to be managed
> by systemd? Looking at how we do it, it should be easily movable to systemd?
> Can we consider doing this for Ocata?
>
> Thoughts?

If I understand correctly the work that is being done in Composable
HA, Redis will be able to be deployed with Pacemaker on a dedicated
node:
https://review.openstack.org/#/c/362389

I'm not sure if we can deploy it with systemd only in an HA manner, but
Michele could eventually confirm.

Regarding the timeline to do it in Ocata, I'm really not sure that's
doable, it depends on the Composable HA progress I guess.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] Meeting Tuesday Dec 13 at 09:00 UTC

2016-12-12 Thread Pierre Riteau
Hello everyone,

The Blazar team will hold its weekly meeting tomorrow December 13 at 09:00 UTC 
in channel #openstack-meeting-alt.

The agenda is available at 
https://wiki.openstack.org/wiki/Meetings/Blazar#Agenda_for_December_13_2016
Feel free to add other agenda items and/or update the Etherpad that we have 
been using to track our progress.

Best regards,
Pierre


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] how to use mock driver

2016-12-12 Thread Rosensweig, Elisha (Nokia - IL)
Hi,


- In Vitrage datasources, we can have a different input format for
snapshots and updates. Thus, we need a different JSON file for each.

- Also, as part of the mock feature, we need to support (for each
resource) things that will be static, such as its name, and things that
change over time, such as timestamps. We support this partially via
different JSON files. In general, the dynamic file (marked with "D")
overwrites the static one (marked with "S").

- In the code you can further inject specific fields you want to have
for a specific test, in addition to the JSON files. See examples in
test_scenario_evaluator.py.
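A toy sketch of the "dynamic overwrites static" merge rule described above; the field names are invented for illustration and are not Vitrage's actual spec format:

```python
# Static ("S") spec: fields that stay stable for a mocked resource.
static_spec = {
    "name": "host-1",
    "type": "nova.host",
    "timestamp": None,
}

# Dynamic ("D") spec: fields regenerated per event, e.g. timestamps.
dynamic_spec = {
    "timestamp": "2016-12-12T08:23:00Z",
}

# Dynamic values win wherever both files define the same field.
merged = {**static_spec, **dynamic_spec}
```

Fields injected in test code for a specific test would simply be merged on top of `merged` the same way.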

Elisha

From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
Sent: Monday, December 12, 2016 8:23 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [vitrage] how to use mock driver

Is there any documentation on how to use mock driver for unit testing?

It seems it generates fake events from a JSON spec, but what is the difference
between

- `xxx_snapshot_X.json` and `xxx_dynamic_X.json`
- `xxx_S` and `xxx_D`

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptls][tc][goals] community goals for Pike

2016-12-12 Thread Emilien Macchi
On Tue, Nov 29, 2016 at 7:39 PM, Emilien Macchi  wrote:
> A few months ago, our community started to find and work on
> OpenStack-wide goals to "achieve visible common changes, push for
> basic levels of consistency and user experience, and efficiently
> improve certain areas where technical debt payments have become too
> high – across all OpenStack projects".
>
> http://governance.openstack.org/goals/index.html
>
> We started to define a first Goal in Ocata (Remove Copies of Incubated
> Oslo Code) and we would like to move forward in Pike.
> I see 3 actions we could take now:
>
> 1) Collect feedback of our first iteration of Community Goals in
> OpenStack during Ocata. What went well? What was more challenging?
>
> Some examples:
> - should we move the goal documents into a separate repo to allow a
> shorter review time, where we could just have 2 TC members approve
> them instead of waiting a week?
> -  we expected all teams to respond to all goals, even if they have no
> work to do. Should we continue that way?
> - should we improve the guidance to achieve Goals?
>
> I created an etherpad if folks want to give feedback:
> https://etherpad.openstack.org/p/community-goals-ocata-feedback
>
> 2) Goals backlog - https://etherpad.openstack.org/p/community-goals
> - new Goals are highly welcome.
> - each Goal would be achievable in one cycle, if not I think we need
> to break it down into separated Goals (with connections).
> - some Goals already have a team (ex: Python 3) but some haven't.
> Maybe could we dress a list of people able to step-up and volunteer to
> help on these ones.
> - some Goals might require some documentation for how to achieve it.
>
> I think for now 2) can be discussed on the etherpad, though feel free
> to propose another channel.
>
> 3) Choose Goals for Pike.
> Some of us already did, but we might want to start looking at what
> Goals we would like to achieve during Pike cycle.
> I was thinking at giving a score to the Goals, that could be
> calculated by its priority (I know it's vague but we know what is
> really urgent for us versus what can wait 6 months); but also the
> number of people who are interested to contribute on a Goal (if this
> Goal doesn't have a team yet).
> For now, openstack/governance is the repository for Goals, please
> propose them here.
>
>
> Please give feedback, we're doing iterations here, and hopefully we'll
> improve our Community Goals over the next cycles.
> Thanks for your time,

Two weeks have passed; here's a digest of the etherpad:

- Most projects achieved the goal for Ocata, and we saw strong
interest to do it on time
- Some confusion between the ACK'ing of a goal, and actually doing the work.
- Some projects were slow on the uptake (of starting the work) and
even reviewing the patches.
- For now, keep using openstack/governance repo for documenting Goals.
- Improve guidance on what projects are expected to do when updating
the status of the Goal.
- For each Goal, document who the "guides" are and how to find them
when help is needed.
- It seems like achieving multiple Goals in a single cycle wouldn't be
possible for all teams, we could prioritize them to let teams achieve
more than one Goal within a cycle.

What's next?
https://etherpad.openstack.org/p/community-goals
Now that we have a good set of Goals that are proposed in this
etherpad, we might want to rank them by priority (1 is the most
important). Feel free to do it in the etherpad, by putting a rank in
"Priority rank".

Also, I've noticed some Goals might be too big to be achievable within
a single cycle and might need to be split (Rolling upgrades for
example). If you're the author of one of these goals, please do so.
I hope we can start defining Pike Goals by next week, so we can start
documenting what we would expect and the guidance to achieve it/them.

Any feedback is welcome,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][heat][mistral][magnum] Manage Ironic resource in Orchestration.

2016-12-12 Thread Pavlo Shchelokovskyy
Hi all,

On Mon, Dec 12, 2016 at 1:17 PM, Steven Hardy  wrote:

> On Mon, Dec 12, 2016 at 04:18:59PM +0800, Rico Lin wrote:
> >Think about bare metal (ironic) deployment: we can call ironic
> >directly, or use nova to deploy.
> >Right now I have a spec about implementing Ironic resources [1] in Heat,
> >including Chassis, Port and some state-setting features for Node
> >(including node-set-xxx). In the heat team discussion, we tend not to
> >implement the Node deployment feature.
> >
> >If we count Ironic actions as three parts:
> >1. Create the node in the ironic DB
> >2. Do a series of steps to get the node in a state where it's ready
> >for deployment (this may include introspection via ironic-inspector,
> >or just changing the state)
> >3. Select a node and drive a workflow that deploys an image on it.
>
> Yes, I think a heat resource is a good fit for (1), but less so for (2) and
> (3), because these are really workflows.  I started looking into ways of
> automating those workflows via mistral here (WIP, needs more work):
>
> https://review.openstack.org/#/c/313048/
>
> So what I would like is to finish that deployment workflow, then have some
> way to drive it via heat, e.g:
>
> resources:
>   the_node:
> type: OS::Ironic::Node
> properties:
>   
>
>   node_deploy_workflow:
> type: OS::Mistral::Workflow
> properties:
>   input:
> node: {get_resource: the_node}
> tasks:
>   
>
> There are two missing pieces (other than fixing the workflow example I
> referenced above):
>
> 1. The Ironic node heat resource
> 2. Some way to actually execute the mistral workflow during the stack
> create


> I think (1) is pretty simple, I wrote some resources that do that
> previously ref https://review.openstack.org/#/c/104223/ - but that effort
> stalled because at that time we didn't have a good answer to how we'd drive
> the deployment workflow (IMO we do now, it's mistral).
>
> The missing part for (2) is that currently OS::Mistral::Workflow expects a
> signal to actually create a workflow execution and run the workflow.  I
> think we need some other option (a boolean that says run once when we
> create the resource perhaps?) to make it more convenient to drive a one-off
> workflow during a stack create/update in a synchronous way.
>

Not sure if it would be relevant, but Ironic now has support for
notifications sent over RPC bus (and there is an ongoing implementation for
SearchLight plugin in this regard). In particular, notifications can be
enabled on node changing its provisioning state, so any system that can
listen to those notifications can be configured/taught to react on node
reaching the ACTIVE state.


>
> >What we can do in Heat is to use Nova for (3) and hope someone already
> >handles (1) and (2). If we consider (1) and (2) also part of Heat
> >resources, we can actually make entire management done in a heat
> >template.
>
> While we could implement the workflow directly in the heat resources, IMO
> it'd be nice to consider mistral instead, unless there are objections to
> making that a dependency when solving this problem.
>
> I actually think Nova is much less interesting for many Ironic use-cases,
> e.g for TripleO all we use nova for is to schedule to groups of nodes using
> a very simple filter.  It'd be very nice to remove any dependency on Nova
> and just drive deployments with explicit node placement directly via Ironic
> (either via Heat, or Heat->Mistral, or just Mistral depending on your
> preferences).
>
> >The use case in my head was ironic+magnum case:
> >Ironic resource handles state we need, then through magnum resource,
> nova
> >will deploy that baremetal node and config it as part of COE.
> >The open question is if heat really implemented such feature, who
> will
> >benefit from it and how are they going to use it? We certainly don't
> want
> >to implement something that no one will use it or not even think it's
> a
> >good idea.
>
> I think TripleO definitely would benefit from this work too - I'd just like
> to see it done in a way which makes depending on Nova optional (it's a
> major overhead, and for some baremetal deployment use-cases, it's not
> providing much value).
>
> >And which projects might be a good fit if it's not a good idea to do
> >in heat?
> >We can also think about the possibility of implementing it by putting
> >it in the Nova resource in heat if it's a baremetal case; Heat+Mistral,
> >or just Mistral will do.
>
> My preference would be Heat+Mistral as discussed above, but open to other
> ideas.
>
> I don't think conflating any of this with the Nova resource is a good idea
> - if we decide to implement the workflow directly in heat as an alternative
> to depending on Mistral it should probably be a new resource, perhaps
> implemented with a properties schema that makes overridding the 

Re: [openstack-dev] [ironic][heat][mistral][magnum] Manage Ironic resource in Orchestration.

2016-12-12 Thread Steven Hardy
On Mon, Dec 12, 2016 at 04:18:59PM +0800, Rico Lin wrote:
>Think about bare metal (ironic) deployment: we can call ironic
>directly, or use nova to deploy.
>Right now I have a spec about implementing Ironic resources [1] in Heat,
>including Chassis, Port and some state-setting features for Node
>(including node-set-xxx). In the heat team discussion, we tend not to
>implement the Node deployment feature.
>
>If we count Ironic actions as three parts:
>1. Create the node in the ironic DB
>2. Do a series of steps to get the node in a state where it's ready for
>deployment (this may include introspection via ironic-inspector, or just
>changing the state)
>3. Select a node and drive a workflow that deploys an image on it.

Yes, I think a heat resource is a good fit for (1), but less so for (2) and
(3), because these are really workflows.  I started looking into ways of
automating those workflows via mistral here (WIP, needs more work):

https://review.openstack.org/#/c/313048/

So what I would like is to finish that deployment workflow, then have some
way to drive it via heat, e.g:

resources:
  the_node:
type: OS::Ironic::Node
properties:
  

  node_deploy_workflow:
type: OS::Mistral::Workflow
properties:
  input:
node: {get_resource: the_node}
tasks:
  

There are two missing pieces (other than fixing the workflow example I
referenced above):

1. The Ironic node heat resource
2. Some way to actually execute the mistral workflow during the stack create

I think (1) is pretty simple, I wrote some resources that do that
previously ref https://review.openstack.org/#/c/104223/ - but that effort
stalled because at that time we didn't have a good answer to how we'd drive
the deployment workflow (IMO we do now, it's mistral).

The missing part for (2) is that currently OS::Mistral::Workflow expects a
signal to actually create a workflow execution and run the workflow.  I
think we need some other option (a boolean that says run once when we
create the resource perhaps?) to make it more convenient to drive a one-off
workflow during a stack create/update in a synchronous way.

>What we can do in Heat is to use Nova for (3) and hope someone already
>handles (1) and (2). If we consider (1) and (2) also part of Heat
>resources, we can actually make entire management done in a heat template.

While we could implement the workflow directly in the heat resources, IMO
it'd be nice to consider mistral instead, unless there are objections to
making that a dependency when solving this problem.

I actually think Nova is much less interesting for many Ironic use-cases,
e.g for TripleO all we use nova for is to schedule to groups of nodes using
a very simple filter.  It'd be very nice to remove any dependency on Nova
and just drive deployments with explicit node placement directly via Ironic
(either via Heat, or Heat->Mistral, or just Mistral depending on your
preferences).

>The use case in my head was ironic+magnum case:
>Ironic resource handles state we need, then through magnum resource, nova
>will deploy that baremetal node and config it as part of COE.
>The open question is if heat really implemented such feature, who will
>benefit from it and how are they going to use it? We certainly don't want
>to implement something that no one will use it or not even think it's a
>good idea.

I think TripleO definitely would benefit from this work too - I'd just like
to see it done in a way which makes depending on Nova optional (it's a
major overhead, and for some baremetal deployment use-cases, it's not
providing much value).

>And which projects might be a good fit if it's not a good idea to do in
>heat?
>We can also think about the possibility of implementing it by putting it
>in the Nova resource in heat if it's a baremetal case; Heat+Mistral, or
>just Mistral will do.

My preference would be Heat+Mistral as discussed above, but open to other
ideas.

I don't think conflating any of this with the Nova resource is a good idea
- if we decide to implement the workflow directly in heat as an alternative
to depending on Mistral it should probably be a new resource, perhaps
implemented with a properties schema that makes overriding the normal nova
server resource easy.

I still think Heat+Mistral provides a cleaner solution tho, so I'd like to
see that further explored before committing to an internal reimplementation
of such workflow.

Thanks for reviving this topic - I'm certainly interested in helping move
this forward and/or discussing further.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Duncan Thomas
It's a soft dependency, like most of the vendor specific dependencies - you
only need them if you're using a specific backend. We've loads of them in
cinder, under a whole bunch of licenses. There was a summit session
discussing it that didn't come to any firm conclusions.

On 12 December 2016 at 10:52, Thierry Carrez  wrote:

> Mehdi Abaakouk wrote:
> > I have recently seen that the drbdmanage python library is no longer
> > GPLv2 but needs an end-user license agreement [1].
> > Is this compatible with the driver policy of Cinder ?
>
> It's not acceptable as a dependency of an OpenStack project (be it GPLv2
> or using a custom EULA), see:
>
> https://governance.openstack.org/tc/reference/licensing.html
>
> That said, it doesn't seem to be listed as a Cinder requirement right
> now ? Is it a new dependency being considered, or is it currently flying
> under the radar ?
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Duncan Thomas


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Mehdi Abaakouk

On Mon, Dec 12, 2016 at 11:52:50AM +0100, Thierry Carrez wrote:

> That said, it doesn't seem to be listed as a Cinder requirement right
> now? Is it a new dependency being considered, or is it currently flying
> under the radar?


I think this is because this library is not available on PyPI.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht



Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Andreas Jaeger
On 2016-12-12 11:52, Thierry Carrez wrote:
> Mehdi Abaakouk wrote:
>> I have recently seen that the drbdmanage python library is no longer
>> GPL2 but needs an end user license agreement [1].
>> Is this compatible with the driver policy of Cinder?
> 
> It's not acceptable as a dependency of an OpenStack project (be it GPLv2
> or using a custom EULA), see:
> 
> https://governance.openstack.org/tc/reference/licensing.html
> 
> That said, it doesn't seem to be listed as a Cinder requirement right
> now? Is it a new dependency being considered, or is it currently flying
> under the radar?
> 

AFAIU:
http://git.openstack.org/cgit/openstack/cinder/tree/cinder/volume/drivers/drbdmanagedrv.py

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Thierry Carrez
Mehdi Abaakouk wrote:
> I have recently seen that the drbdmanage python library is no longer
> GPL2 but needs an end user license agreement [1].
> Is this compatible with the driver policy of Cinder?

It's not acceptable as a dependency of an OpenStack project (be it GPLv2
or using a custom EULA), see:

https://governance.openstack.org/tc/reference/licensing.html

That said, it doesn't seem to be listed as a Cinder requirement right
now? Is it a new dependency being considered, or is it currently flying
under the radar?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] New comer to Openstack.

2016-12-12 Thread Thierry Carrez
Gary Kotton wrote:
> Welcome aboard. Please see http://www.openstack.org/. A lot of the
> projects have developer guidelines. Maybe start there…

Also see: https://wiki.openstack.org/wiki/Main_Page for a list of
upstream resources.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-12 Thread Thierry Carrez
John Villalovos wrote:
> So how about:
> - we enable an #openstack-meeting-5 to instantly relieve scheduling
> pressure
> 
> 
> Any reason it isn't #openstack-meeting-2 ?
> 
> The -2 channel is owned by openstackinfra.

Currently we have:
 #openstack-meeting
 #openstack-meeting-alt
 #openstack-meeting-3
 #openstack-meeting-4

There was a thread in the past discussing renaming -alt to -2 but it
concluded that it was not worth the hassle (losing the 489 people
hanging there).

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ironic][heat][mistral][magnum] Manage Ironic resource in Orchestration.

2016-12-12 Thread Ligong LG1 Duan
I think it is a good idea to use OpenStack to manage steps 1) and 2).
Currently we are using an Ansible playbook to implement 1) and 2) and then 
using Magnum to provision a container COE on bare metal. It would be better if 
we used Heat or Mistral to implement the same. But if we want to implement it 
in Heat, we first need an Ironic client in the Heat engine, if there is none 
already. (I remember that there was a discussion on an Ironic client in Heat, 
but I am not sure whether it has been implemented.)

Regards,
Ligong Duan

From: Rico Lin [mailto:rico.lin.gua...@gmail.com]
Sent: Monday, December 12, 2016 4:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ironic][heat][mistral][magnum] Manage Ironic resource 
in Orchestration.

For bare metal (Ironic) deployment, we can either call Ironic directly or 
use Nova to deploy.
Right now I have a spec about implementing Ironic resources [1] in Heat, 
including Chassis, Port and some state-setting features for Node (including 
node-set-xxx). In the Heat team discussion, we tend not to implement the Node 
deployment feature.

If we split Ironic actions into three parts:
1. Create the node in the ironic DB
2. Do a series of steps to get the node in a state where it's ready for 
deployment (this may include introspection via ironic-inspector, or just 
changing the state)
3. Select a node and drive a workflow that deploys an image on it.

What we can do in Heat is use Nova for (3) and hope someone has already 
handled (1) and (2). If we consider (1) and (2) as Heat resources as well, the 
entire management can actually be done in a Heat template.

The use case in my head is the Ironic+Magnum case:
the Ironic resource handles the state we need; then, through the Magnum 
resource, Nova deploys that bare metal node and configures it as part of the COE.

The open question is: if Heat really implemented such a feature, who would 
benefit from it, and how would they use it? We certainly don't want to 
implement something that no one will use or that no one even thinks is a good 
idea.
And which projects might be a good fit if Heat is not the right place?
We can also consider implementing it in the Nova resource in Heat for the bare 
metal case, in Heat+Mistral, or in Mistral alone.
Any ideas or use cases?


[1] https://review.openstack.org/#/c/393108
--
May The Force of OpenStack Be With You,




Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Dulko, Michal
On Mon, 2016-12-12 at 07:58 +0100, Mehdi Abaakouk wrote:

> Hi,
>
> I have recently seen that the drbdmanage python library is no longer GPL2
> but needs an end user license agreement [1].
>
> Is this compatible with the driver policy of Cinder?
>
> [1]
> http://git.drbd.org/drbdmanage.git/commitdiff/441dc6a96b0bc6a08d2469fa5a82d97fc08e8ec1



Issues with licensing mostly concern the possibility of including the official 
driver in any OpenStack distro. It seems to me that the following statement in 
the new license prohibits that for drbdmanage:

3.4) Without prior written consent of LICENSOR or an authorized partner,
LICENSEE is not allowed to:



b) provide commercial turn-key solutions based on the LICENSED SOFTWARE or
commercial services for the LICENSED SOFTWARE or its modifications to any
third party (e.g. software support or trainings).

I think we need to collect feedback from distro vendors and the DRBD team and 
then decide if we should remove the driver from Cinder.


Re: [openstack-dev] [heat] A question on creating Manila Share

2016-12-12 Thread Rico Lin
NP, I changed the assignee to you, so others will better know who is
helping to fix it.

2016-12-12 16:56 GMT+08:00 zengchen :

> Rabi Mishra & Rico Lin:
> Thanks very much for your reply!
> I have submitted a bug report at https://bugs.launchpad.net/
> heat/+bug/1649217, and will submit a patch to fix it later.
>
> cheers
> zengchen
>
>
> 在 2016-12-09 12:45:01,"Rabi Mishra"  写道:
>
> Hi zengchen,
>
> Yeah, the constraint looks incorrect. Not sure if we got it wrong or
> Manila has changed it afterwards. It would be good to raise a bug / propose
> a fix.
>
>
> On Fri, Dec 9, 2016 at 8:26 AM, zengchen  wrote:
>
>> Hi, Heat stackers:
>> May I ask a question about creating a Manila Share? I see Heat defines
>> some constraints for the property schema 'ACCESS_TYPE' at
>> heat.engine.resources.openstack.manila.share.properties_schema[ACCESS_RULES].
>> I copy the code below. The allowed values for 'ACCESS_TYPE' are 'ip',
>> 'domain'.
>>
>> ACCESS_TYPE: properties.Schema(
>>     properties.Schema.STRING,
>>     _('Type of access that should be provided to guest.'),
>>     constraints=[constraints.AllowedValues(
>>         ['ip', 'domain'])],
>>     required=True
>> ),
>>
>> However, I see Manila has defined different allowed values for
>> 'ACCESS_TYPE', which include 'ip', 'user', 'cert', 'cephx'. So my
>> question is: does Heat need an update, or do I miss something? Hope for
>> your reply. Thanks very much!
>>
>>
>> cheers
>>
>> zengchen
>>
>>
>>
>> 
>>
>>
>
>
> --
> Regards,
> Rabi Mishra
>
>
>
>


-- 
May The Force of OpenStack Be With You,



Rico Lin
Chief OpenStack Technologist, inwinSTACK
irc: ricolin


Re: [openstack-dev] [heat] A question on creating Manila Share

2016-12-12 Thread zengchen
Rabi Mishra & Rico Lin:
Thanks very much for your reply! 
I have submitted a bug report at 
https://bugs.launchpad.net/heat/+bug/1649217, and will submit a patch to fix it 
later.


cheers
zengchen



在 2016-12-09 12:45:01,"Rabi Mishra"  写道:

Hi zengchen,

Yeah, the constraint looks incorrect. Not sure if we got it wrong or Manila has 
changed it afterwards. It would be good to raise a bug / propose a fix.




On Fri, Dec 9, 2016 at 8:26 AM, zengchen  wrote:

Hi, Heat stackers:
May I ask a question about creating a Manila Share? I see Heat defines some 
constraints for the property schema 'ACCESS_TYPE' at
heat.engine.resources.openstack.manila.share.properties_schema[ACCESS_RULES].
I copy the code below. The allowed values for 'ACCESS_TYPE' are 'ip', 
'domain'.

ACCESS_TYPE: properties.Schema(
    properties.Schema.STRING,
    _('Type of access that should be provided to guest.'),
    constraints=[constraints.AllowedValues(
        ['ip', 'domain'])],
    required=True
),

However, I see Manila has defined different allowed values for 'ACCESS_TYPE', 
which include 'ip', 'user', 'cert', 'cephx'. So my question is: does Heat need 
an update, or do I miss something? Hope for your reply. Thanks very much!
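If Heat's constraint is indeed stale, a minimal sketch of the mismatch (plain
Python with illustrative names, not Heat's real schema classes) is:

```python
# Illustrative sketch of the constraint mismatch discussed above.
# The two lists and the function below are hypothetical stand-ins.
HEAT_ALLOWED = ["ip", "domain"]                    # what Heat's schema allows today
MANILA_ALLOWED = ["ip", "user", "cert", "cephx"]   # what Manila actually accepts


def validate_access_type(access_type, allowed):
    """Mimic an AllowedValues-style constraint check."""
    if access_type not in allowed:
        raise ValueError("'%s' is not one of %s" % (access_type, allowed))
    return access_type
```

With lists like these, 'cephx' passes Manila's own validation but would be
rejected by Heat's current schema, which is the behavior described above; the
likely fix is to extend Heat's AllowedValues list to match Manila's.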


cheers
zengchen








--

Regards,
Rabi Mishra



Re: [openstack-dev] [docs][badges][all] Docs gate broken for projects that include README.rst in docs

2016-12-12 Thread Flavio Percoco

On 11/12/16 13:32 +0100, Flavio Percoco wrote:

On 09/12/16 17:20 +0100, Flavio Percoco wrote:

Greetings,

Some docs jobs seem to be broken by the latest (or not?) docutils release. The
breakage seems to be related to the recent addition of the badges patch. The
docs generation doesn't like remote images. It used to be a warning but it
seems to have turned into an error now. While this is reported and fixed
upstream, we can work around the issue by tagging the image as remote.

An example of this fix can be found here: 
https://review.openstack.org/#/q/topic:readme-badge-fix

Note that this is mostly relevant for projects that include the README files in
their documentation. If your project doesn't do this, you can ignore this email.
That said, I'd recommend all projects do it.



Apparently this "fix" doesn't render the image, which is far from the ideal
solution. Hang on while we find a better fix.


Ok, here's the actual "fix" for this issue. We're now skipping version 0.13.1 of
docutils, as that release breaks several docs gates. If your project is using the
requirements constraints, you should not be hitting this issue. However, if your
project isn't using the upper constraints, then you may want to do something
similar to this[0][1].

This issue has been reported upstream [2].

[0] https://review.openstack.org/#/c/409630/
[1] https://review.openstack.org/#/c/409529/
[2] https://sourceforge.net/p/docutils/bugs/301/
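The upper-constraints approach boils down to excluding the broken release with
a requirement line like `docutils!=0.13.1`. As a toy illustration of how such
an exclusion behaves (a simplified `!=`-only parser; real projects should rely
on pip's PEP 440 handling rather than anything like this):

```python
def satisfies(version, requirement):
    """Toy check of a '!='-only requirement line such as 'docutils!=0.13.1'.

    A requirement with no '!=' specifier accepts any version; otherwise the
    single excluded version is rejected.
    """
    name, _, excluded = requirement.partition("!=")
    return excluded == "" or version != excluded
```

So 0.12 and 0.13 would still install while 0.13.1 is skipped, mirroring the
effect of the fixes in [0] and [1].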

Hope the above helps,
Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [ironic][heat][mistral][magnum] Manage Ironic resource in Orchestration.

2016-12-12 Thread Rico Lin
For bare metal (Ironic) deployment, we can either call Ironic directly or
use Nova to deploy.
Right now I have a spec about implementing Ironic resources [1] in Heat,
including Chassis, Port and some state-setting features for Node (including
node-set-xxx). In the Heat team discussion, we tend not to implement the Node
deployment feature.

If we split Ironic actions into three parts:
1. Create the node in the ironic DB
2. Do a series of steps to get the node in a state where it's ready for
deployment (this may include introspection via ironic-inspector, or just
changing the state)
3. Select a node and drive a workflow that deploys an image on it.
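As a rough illustration of these three parts, here is a toy state machine with
made-up states and function names — not the real Ironic or ironicclient API:

```python
class FakeNode(object):
    """Stand-in for an Ironic node record; states here are illustrative."""

    def __init__(self, name):
        self.name = name
        self.state = "enrolled"   # part 1: the node exists in the DB
        self.image = None


def prepare_node(node):
    """Part 2: inspection / state changes until the node is deployable."""
    if node.state != "enrolled":
        raise ValueError("unexpected state: %s" % node.state)
    node.state = "available"
    return node


def deploy_node(node, image):
    """Part 3: drive a workflow that puts an image on the node."""
    if node.state != "available":
        raise ValueError("node not ready: %s" % node.state)
    node.image = image
    node.state = "active"
    return node
```

The point of the sketch is only that (1) and (2) are separate lifecycle steps
that something must drive before the Nova-based deploy in (3) can happen.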

What we can do in Heat is use Nova for (3) and hope someone has already
handled (1) and (2). If we consider (1) and (2) as Heat resources as well, the
entire management can actually be done in a Heat template.

The use case in my head is the Ironic+Magnum case:
the Ironic resource handles the state we need; then, through the Magnum
resource, Nova deploys that bare metal node and configures it as part of the COE.

The open question is: if Heat really implemented such a feature, who would
benefit from it, and how would they use it? We certainly don't want to
implement something that no one will use or that no one even thinks is a
good idea.
And which projects might be a good fit if Heat is not the right place?
We can also consider implementing it in the Nova resource in Heat for the
bare metal case, in Heat+Mistral, or in Mistral alone.
Any ideas or use cases?


[1] https://review.openstack.org/#/c/393108
-- 
May The Force of OpenStack Be With You,