Re: [openstack-dev] [Heat] Heat template example repository

2017-05-12 Thread Foss Geek
Hi Lance, I am also interested in assisting you with this.

Thanks
Mohan
On 11-May-2017 2:25 am, "Lance Haig"  wrote:

> Hi,
>
> I would like to introduce myself to the heat team.
>
> My name is Lance Haig. I currently work for Mirantis doing workload
> onboarding to OpenStack.
>
> Part of my job is assisting customers with using the new OpenStack cloud
> they have been given.
>
> I recently gave a talk with a colleague, Florin Stingaciu, on LCM with Heat
> at the Boston Summit.
>
> I am interested in assisting the project.
>
> We have noticed that there are some outdated examples in the heat-examples
> repository and I am not sure that they all still function.
>
> I was wondering if it would be valuable for me to take a look at these and
> fix them or perhaps we can rethink how we present the examples.
>
> I am interested in what you guys think.
>
> Thanks
>
> Lance
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] multiple vlan provider and tenant networks at the same time?

2017-05-12 Thread Ian Wells
There are two steps to how this information is used:

Step 1: create a network - the type driver config on the neutron-server
host will determine which physnet and VLAN ID to use when you create it.
It gets stored in the DB.  No networking is actually done, we're just
making a reservation here.  The network_vlan_ranges are used to work out
which VLANs can be used automatically for tenant networks (and if you
specify a provider network then the information in the Neutron call is
just copied into the DB).

Step 2: bind a port - we look up the network in the DB to find all that
information out, then tell the OVS agent to attach a port to a specific
physnet and VLAN on a specific host.  The OVS agent on that host uses the
bridge_mappings to work out how to do that.  And note, we don't care
whether the network was created as a tenant or provider network at this
point - it's just a network.

On 12 May 2017 at 06:26,  wrote:

> [ml2_type_vlan]
> network_vlan_ranges = provider0,provider1,tenant-vlan2:200:299,tenant-vlan3:300:399
>

So here you're telling the controller (neutron-server) that two physical
networks, provider0 and provider1, exist that can only be used for VLAN
provider networks (because you haven't granted any ranges to neutron-server
for it to use automatically), and you've set up two physical networks with
VLAN ranges that Neutron may consume automatically for its tenant networks
(it will use VLANs out of the ranges you gave) *or* you can use for
provider networks (by specifying a VLAN using the provider network
properties when you create the Neutron network).  This tells Neutron
information it can use in step 1 above, at allocation time.
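
A quick sketch (in Python, purely illustrative -- not Neutron's actual code) of how such a network_vlan_ranges value can be interpreted:

```python
def parse_network_vlan_ranges(value):
    """Interpret an ml2_type_vlan network_vlan_ranges value as described
    above: a bare physnet name means provider networks only, while
    physnet:min:max also grants Neutron a pool of tenant VLANs."""
    ranges = {}
    for entry in value.split(","):
        parts = entry.split(":")
        if len(parts) == 1:
            ranges[parts[0]] = None                      # provider-only
        elif len(parts) == 3:
            ranges[parts[0]] = (int(parts[1]), int(parts[2]))
        else:
            raise ValueError("malformed entry: %r" % entry)
    return ranges

value = "provider0,provider1,tenant-vlan2:200:299,tenant-vlan3:300:399"
print(parse_network_vlan_ranges(value))
# {'provider0': None, 'provider1': None,
#  'tenant-vlan2': (200, 299), 'tenant-vlan3': (300, 399)}
```

So with your configuration, neutron-server ends up knowing two provider-only physnets and two physnets with automatic tenant VLAN pools.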

A side note: different physical networks *should* be wired to entirely
independent physical switches that are not connected to each other; that's
what they're designed for, networks that are physically separated.  That
said, I've seen cases where people do actually wire them together for
various reasons, like getting separate bandwidth through a different
interface for external networks.  If you do that you have to be careful
which VLANs you use for provider networks on your provider physnets -
Neutron will not be able to work out that provider0 vlan 200 and
tenant-vlan2 vlan 200 are actually the same network, for instance, if you
connect both uplinks to the same switch.

> [ovs]
> bridge_mappings = provider0:br-ext0,provider1:br-ext1,tenant-vlan2:br-vlan2,tenant-vlan3:br-vlan3
>

The 'bridge_mappings' setting is for compute and network nodes, where you
plan on connecting things to the network.  It tells the OVS agent how to
get packets to and from a specific physical network.  It gets used for port
binding - step 2 - and *not* when networks are created.  You've specified a
means to get packets from all four of your physnets, which is normal.  It
doesn't say anything about how those physnets are used - it doesn't even
say they're used for VLANs, I could put a flat network on there if I wanted
- and it certainly doesn't say why those VLANs might have been chosen.
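
To make the contrast concrete, here is an illustrative (not Neutron's actual) parse of a bridge_mappings value -- note that, unlike network_vlan_ranges, it carries no VLAN information at all:

```python
def parse_bridge_mappings(value):
    """Split an OVS agent bridge_mappings value into physnet -> bridge.
    It only tells the agent which bridge reaches which physical
    network; nothing here says how those physnets are used."""
    mappings = {}
    for pair in value.split(","):
        physnet, bridge = pair.split(":")
        mappings[physnet.strip()] = bridge.strip()
    return mappings

value = ("provider0:br-ext0,provider1:br-ext1,"
         "tenant-vlan2:br-vlan2,tenant-vlan3:br-vlan3")
print(parse_bridge_mappings(value)["tenant-vlan2"])  # br-vlan2
```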

> How can neutron decide on choosing correct vlan mapping for tenant? Will it
> try to pick provider0 if normal user creates a tenant network?
>

Neutron-server will choose VLANs for itself when you create a network and
don't hand over provider network properties.  And it will only
choose VLANs from the ranges you specified - so it will never choose a VLAN
automatically from the providerX physnets, given your configuration.
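
That allocation behavior can be sketched like this (again illustrative Python, not the real type driver):

```python
def allocate_tenant_vlan(ranges, allocated):
    """Choose a free (physnet, vlan) for a new tenant network.  Only
    physnets that were granted a range are considered; provider-only
    physnets (range is None) are never picked automatically."""
    for physnet, vlan_range in ranges.items():
        if vlan_range is None:
            continue
        low, high = vlan_range
        for vlan in range(low, high + 1):
            if (physnet, vlan) not in allocated:
                allocated.add((physnet, vlan))
                return physnet, vlan
    raise RuntimeError("no free tenant VLANs left")

ranges = {"provider0": None, "provider1": None,
          "tenant-vlan2": (200, 299), "tenant-vlan3": (300, 399)}
in_use = {("tenant-vlan2", 200)}
print(allocate_tenant_vlan(ranges, in_use))  # ('tenant-vlan2', 201)
```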
-- 
Ian.


Re: [openstack-dev] [Ceilometer]

2017-05-12 Thread gordon chung


On 09/05/17 10:08 AM, simona marinova wrote:

>  The Alarming service doesn't work at this point. For example the
> command "ceilometer alarm-list" gives the error:
>
>

you should be using aodhclient if you have aodh.

>
> Now my biggest concern is that the Alarming service database
> (MySQL-based) and the Data collection service database (MongoDB) are not
> communicating properly. Is it possible for the Aodh to access the data
> from MongoDB?

the alarming database and the deprecated ceilometer db are not supposed 
to be communicating with each other so i'm not entirely sure what you 
are expecting. ceilometer meter-list grabs data from legacy ceilometer 
db, aodh alarm list grabs data from aodh.


-- 
gord


Re: [openstack-dev] [neutron] multi-site forum discussion

2017-05-12 Thread joehuang
Hello,

Neutron cells awareness is not the same thing as multi-site. There are lots of 
multi-site deployment options, not limited to Nova cells; whether to use 
Neutron cells/Nova cells in multi-site deployments is up to the cloud operator. 
For the bug [3], it's reasonable to make Neutron support cells, but that doesn't 
mean multi-site deployments must adopt Neutron cells.

[3] https://bugs.launchpad.net/neutron/+bug/1690425

Best Regards
Chaoyi Huang (joehuang)

From: Armando M. [arma...@gmail.com]
Sent: 13 May 2017 3:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] multi-site forum discussion



On 12 May 2017 at 11:47, Morales, Victor wrote:
Armando,

I noticed that Tricircle is mentioned there.  Wouldn't it be better to extend its 
current functionality, or what are the things that are missing there?

Tricircle aims at coordinating independent neutron systems that exist in 
separate OpenStack deployments. Making Neutron cell-aware will work in the 
context of the same OpenStack deployment.


Regards,
Victor Morales

From: "Armando M."
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, May 12, 2017 at 1:06 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [neutron] multi-site forum discussion

Hi folks,

At the summit we had a discussion on how to deploy a single neutron system 
across multiple geographical sites [1]. You can find notes of the discussion on 
[2].

One key requirement that came from the discussion was to make Neutron more Nova 
cells friendly. I filed an RFE bug [3] so that we can move this forward on 
Launchpad.

Please, do provide feedback in case I omitted some other key takeaway.

Thanks,
Armando

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18757/neutron-multi-site
[2] https://etherpad.openstack.org/p/pike-neutron-multi-site
[3] https://bugs.launchpad.net/neutron/+bug/1690425





Re: [openstack-dev] [neutron] multi-site forum discussion

2017-05-12 Thread Armando M.
On 12 May 2017 at 11:47, Morales, Victor  wrote:

> Armando,
>
>
>
> I noticed that Tricircle is mentioned there.  Wouldn't it be better to
> extend its current functionality, or what are the things that are missing
> there?
>

Tricircle aims at coordinating independent neutron systems that exist in
separate OpenStack deployments. Making Neutron cell-aware will work in the
context of the same OpenStack deployment.


>
>
> Regards,
>
> Victor Morales
>
>
>
> From: "Armando M."
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Friday, May 12, 2017 at 1:06 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [neutron] multi-site forum discussion
>
>
>
> Hi folks,
>
>
>
> At the summit we had a discussion on how to deploy a single neutron system
> across multiple geographical sites [1]. You can find notes of the
> discussion on [2].
>
>
>
> One key requirement that came from the discussion was to make Neutron more
> Nova cells friendly. I filed an RFE bug [3] so that we can move this
> forward on Launchpad.
>
>
>
> Please, do provide feedback in case I omitted some other key takeaway.
>
>
>
> Thanks,
>
> Armando
>
>
>
> [1] https://www.openstack.org/summit/boston-2017/summit-schedule/events/18757/neutron-multi-site
>
> [2] https://etherpad.openstack.org/p/pike-neutron-multi-site
>
> [3] https://bugs.launchpad.net/neutron/+bug/1690425
>


[openstack-dev] [neutron]

2017-05-12 Thread Armando M.
Hi folks,

At the summit we had a discussion on how to expand get-me-a-network [1]. A
few main points were collected during the session:

* Make get-me-a-network work with Horizon;
* Make get-me-a-network able to auto-assign floating IPs;
* Make get-me-a-network able to work with any network topology;
* Make get-me-a-network able to deal with NetworkAmbiguous error.

Links to the bug reports [2,3,4,5] are below to move these forward on
Launchpad. Please do provide feedback in case I omitted some other key
takeaway.

Thanks,
Armando

[1] https://etherpad.openstack.org/p/pike-neutron-gman
[2] https://bugs.launchpad.net/horizon/+bug/1690433
[3] https://bugs.launchpad.net/neutron/+bug/1042049
[4] https://bugs.launchpad.net/neutron/+bug/1690438
[5] https://bugs.launchpad.net/neutron/+bug/1690439
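
For context, get-me-a-network is driven through the Nova API (microversion >= 2.37) by passing the string "auto" instead of a list of networks. A rough sketch of the request body involved -- the image/flavor values below are placeholders:

```python
def server_create_body(name, image_ref, flavor_ref, networks="auto"):
    """Build a POST /servers body using the get-me-a-network form:
    with Nova microversion >= 2.37, "networks" may be the string
    "auto" (have Nova/Neutron allocate one) or "none" instead of a
    list of port/network dicts."""
    if networks not in ("auto", "none") and not isinstance(networks, list):
        raise ValueError("networks must be 'auto', 'none' or a list")
    return {"server": {"name": name,
                       "imageRef": image_ref,
                       "flavorRef": flavor_ref,
                       "networks": networks}}

body = server_create_body("demo-vm", "<image-uuid>", "<flavor-id>")
print(body["server"]["networks"])  # auto
```

The items above are essentially about making this "auto" path usable from Horizon and robust in more topologies.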


Re: [openstack-dev] [neutron] multi-site forum discussion

2017-05-12 Thread Morales, Victor
Armando,

I noticed that Tricircle is mentioned there.  Wouldn't it be better to extend its 
current functionality, or what are the things that are missing there?

Regards,
Victor Morales

From: "Armando M." 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, May 12, 2017 at 1:06 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [neutron] multi-site forum discussion

Hi folks,

At the summit we had a discussion on how to deploy a single neutron system 
across multiple geographical sites [1]. You can find notes of the discussion on 
[2].

One key requirement that came from the discussion was to make Neutron more Nova 
cells friendly. I filed an RFE bug [3] so that we can move this forward on 
Launchpad.

Please, do provide feedback in case I omitted some other key takeaway.

Thanks,
Armando

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18757/neutron-multi-site
[2] https://etherpad.openstack.org/p/pike-neutron-multi-site
[3] https://bugs.launchpad.net/neutron/+bug/1690425


[openstack-dev] [neutron] diagnostics

2017-05-12 Thread Armando M.
Hi folks,

At the summit we had a forum session [1] to gather feedback on the current
diagnostics proposal [2] and help the neutron developer team drive the
first implementation of the API proposal.

Two main points were brought for discussion:

1) which diagnostics checks to provide to start with;
2) how to implement such checks.

Reachability seems to be at the top of the list, i.e. the ability to
establish connectivity between two endpoints. The other point was to be
friendly to operators, in that they may not necessarily want to write
Python code in order to add more checks to the platform. The idea is to
define a set of variables to be exposed to the check function and let
the check be written in any language of choice.
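
A minimal sketch of that idea: run an operator-supplied check (written in any language) with the agreed variables exported in its environment, and treat the exit code as the verdict. The variable names here (CHECK_SRC_IP, CHECK_DST_IP) are invented for illustration; the actual set would come out of the spec [2]:

```python
import os
import subprocess

def run_check(check_path, variables):
    """Execute an operator-written check with the agreed variables
    exposed via the environment; exit code 0 means the check passed."""
    env = dict(os.environ)
    env.update({key: str(val) for key, val in variables.items()})
    result = subprocess.run([check_path], env=env)
    return result.returncode == 0

# The check itself could be shell, Python, Go... anything executable.
ok = run_check("/bin/true", {"CHECK_SRC_IP": "10.0.0.5",
                             "CHECK_DST_IP": "10.0.0.9"})
print(ok)  # True
```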

Please, do provide feedback in case I omitted some other key takeaway.

Thanks,
Armando

[1] https://etherpad.openstack.org/p/pike-neutron-diagnostics
[2]
http://specs.openstack.org/openstack/neutron-specs/specs/pike/diagnostics.html


[openstack-dev] [neutron] multi-site forum discussion

2017-05-12 Thread Armando M.
Hi folks,

At the summit we had a discussion on how to deploy a single neutron system
across multiple geographical sites [1]. You can find notes of the
discussion on [2].

One key requirement that came from the discussion was to make Neutron more
Nova cells friendly. I filed an RFE bug [3] so that we can move this
forward on Launchpad.

Please, do provide feedback in case I omitted some other key takeaway.

Thanks,
Armando

[1]
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18757/neutron-multi-site
[2] https://etherpad.openstack.org/p/pike-neutron-multi-site
[3] https://bugs.launchpad.net/neutron/+bug/1690425


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 18 & 19)

2017-05-12 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

The previous week's meeting was short and focused on transition, so I 
didn't send a summary for it. We also had a couple of daily sync 
meetings to discuss the ongoing work. Here's what happened in the last 
two weeks.


= Quickstart Transition Phase 2 Status =

As previously planned, we transitioned the ovb-ha and ovb-nonha jobs to 
run with Quickstart. Please read the details of it from the announcement 
email[1].


The job has been performing really well over the past few days; check the 
statistics here[2]. The only problem is a pingtest failure which 
seems to be a TripleO bug rather than a Quickstart one[3].


We're still working on transitioning periodic and promotion jobs and 
started planning "phase3" which will include updates and upgrades jobs 
and the containerized undercloud job.


= Review Process Improvements =

Ronelle initiated a conversation about improving the speed of landing 
bigger features and changes in Quickstart. A recent example is the OVB 
mode for devmode.sh which is taking a long time to get merged. Ideas 
about the new process can be seen at this etherpad[4].


= Image hosting issues =

We had a discussion about hosting the pre-built images for Quickstart, 
which has been problematic recently and results in a bad experience 
for first-time users.


We can't get the CentOS CDN to serve up-to-date consistent images, and 
we have capacity problems on images.rdoproject.org. The solution might 
be the new RDO Cloud, but for now we are considering having each job 
build the image by default. This could add some overhead but it might 
save time if the download is slow or headaches if the images are outdated.


Thank you for reading the summary. Have a good weekend!

Best regards,
Attila

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116568.html
[2] http://status-tripleoci.rhcloud.com/ and then click on 
"gate-tripleo-ci-centos-7-ovb-ha-oooq"

[3] https://bugs.launchpad.net/tripleo/+bug/1690109
[4] https://review.rdoproject.org/etherpad/p/rdoci-review-process



Re: [openstack-dev] [tripleo][ci] Upgrade CI job for O->P (containerization)

2017-05-12 Thread Jiří Stránský

On 12.5.2017 15:30, Emilien Macchi wrote:

On Wed, May 10, 2017 at 9:26 AM, Jiří Stránský  wrote:

Hi all,

the upgrade job which tests Ocata -> Pike/master upgrade (from bare-metal to
containers) just got a green flag from the CI [1].

I've listed the remaining patches we need to land at the very top of the
container CI etherpad [2], please let's get them reviewed and landed as soon
as we can. The sooner we get the job going, the fewer upgrade regressions
will get merged in the meantime (e.g. we have one from last week).

The CI job utilizes mixed release deployment (master undercloud, overcloud
deployed as Ocata and upgraded to latest). It tests the main overcloud
upgrade phase (no separate compute role upgrades, no converge phase). This
means the testing isn't exhaustive to the full expected "production
scenario", but it covers the most important part where we're likely to see
the most churn and potential breakages. We'll see how much spare wall time
we have to add more things once we get the job to run on patches regularly.


The job you and the team made to make that happen is amazing and outstanding.


Thanks! I should mention I've utilized quite a lot of pre-existing 
upgrades skeleton code in OOOQ that I think mainly Mathieu Bultel put 
together.



Once the jobs are considered stable, I would move them to the gate so
we don't break it. Wdyt?


Yes, absolutely; I'd start with non-voting as usual first.

We still need reviews on the patches to move forward (4 patches 
remaining), so when you (plural :) ) have some bandwidth, please take a 
look:


https://review.openstack.org/#/c/462664/ - 
multinode-container-upgrade.yaml usable for mixed upgrade


https://review.openstack.org/#/c/459789/ - TripleO CI mixed release 
master UC / ocata OC


https://review.openstack.org/#/c/462172/  - Don't overwrite tht_dir for 
upgrades to master


https://review.openstack.org/#/c/460061/ - Use mixed release for 
container upgrade



Thanks

Jirka






Thanks and have a good day!

Jirka

[1]
http://logs.openstack.org/61/460061/15/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/d7faa50/
[2] https://etherpad.openstack.org/p/tripleo-containers-ci










Re: [openstack-dev] [ironic] [tripleo] demo: node auto-discovery with one power button click

2017-05-12 Thread Stig Telfer
Hi Dmitry - 

> On 11 May 2017, at 15:56, Dmitry Tantsur  wrote:
> 
> Hi all!
> 
> While people are enjoying the Forum, I also have something to show.
> 
> I've got a lot of questions about auto-discovery, so I've recorded a demo
> of it using TripleO Ocata: https://www.youtube.com/watch?v=wJkDxxjL3NQ.
> 
> Please let me know what you think!
> 
> Dmitry

Thanks for doing this, much appreciated.  If anyone’s interested, we’ve been 
using this neat capability on bare metal deployments to good effect.

We hit some knock-on consequences as a result of not wasting tedious hours 
manually commissioning our BMCs.  Fiddly stuff, like how to automate assigning 
BMC config after inspection, when BMCs come out of the factory all assigned the 
same default IP...

My colleague Mark and I wrote up our experience and solutions here:

http://www.stackhpc.com/ironic-idrac-ztp.html 


Share and enjoy,
Stig



Re: [openstack-dev] [ironic] [tripleo] demo: node auto-discovery with one power button click

2017-05-12 Thread Dmitry Tantsur

On 05/11/2017 07:28 PM, Jiri Tomasek wrote:



On 11.5.2017 16:56, Dmitry Tantsur wrote:

Hi all!

While people are enjoying the Forum, I also have something to show.

I've got a lot of questions about auto-discovery, so I've recorded a demo
of it using TripleO Ocata: https://www.youtube.com/watch?v=wJkDxxjL3NQ.

Please let me know what you think!

Dmitry



Hi, thanks for the interesting demo! I am wondering how this compares to 
the "Discover nodes, knowing IP range for their BMCs and the default IPMI 
credentials" blueprint [1].


These are essentially unrelated approaches to doing node discovery.



I really like the power of introspection rules. TripleO UI can nicely benefit 
from this, letting user provide rules to apply on introspection.


Regarding the autodiscovery use case, is it possible to somehow power up the 
machines using an API? What if the user does not know the $IPMI_HOST?


Well, this particular use case is about manual or unsupported power management. 
E.g. you press the button, or you use some fancy proprietary CMDB.




Can we replace enabling the autodiscovery option in the undercloud with 
discovering nodes as defined in [1], which would return JSON resembling the 
instackenv.json file, and register the nodes using that? Does it make sense?


Why replace, if we can have both? :)



Can Ironic-inspector send messages via zaqar to notify subscribers about 
starting the autodiscovery?


This is a good idea, and Ironic is essentially capable of it (as soon as you set 
up Zaqar to listen to the oslo.messaging topic). Inspector is not; that would be a 
good RFE.




[1] https://blueprints.launchpad.net/tripleo/+spec/node-discovery-by-range

Thanks
-- Jirka






Re: [openstack-dev] [tripleo] Validations before upgrades and updates

2017-05-12 Thread Emilien Macchi
On Mon, May 8, 2017 at 7:45 AM, Marios Andreou  wrote:
> Hi folks, after some discussion locally with colleagues about improving the
> upgrades experience, one of the items that came up was pre-upgrade and
> update validations. I took an AI to look at the current status of
> tripleo-validations [0] and posted a simple WIP [1] intended to be run
> before an undercloud update/upgrade and which just checks service status. It
> was pointed out by shardy that for such checks it is better to instead
> continue to use the per-service  manifests where possible like [2] for
> example where we check status before N..O major upgrade. There may still be
> some undercloud specific validations that we can land into the
> tripleo-validations repo (thinking about things like the neutron
> networks/ports, validating the current nova nodes state etc?).
>
> So do folks have any thoughts about this subject - for example the kinds of
> things we should be checking - Steve said he had some reviews in progress
> for collecting the overcloud ansible puppet/docker config into an ansible
> playbook that the operator can invoke for upgrade of the 'manual' nodes (for
> example compute in the N..O workflow) - the point being that we can add more
> per-service ansible validation tasks into the service manifests for
> execution when the play is run by the operator - but I'll let Steve point at
> and talk about those.

It looks like a good idea to me. I don't think our operators want to
update / upgrade OpenStack if the cloud is not in a consistent working
state beforehand.

Here are the things we could test:
- Pacemaker cluster health
- Ceph health
- Database
- APIs healthcheck
- RabbitMQ health
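
A rough sketch of how such pre-upgrade checks could be driven. The exact commands are my assumption -- deployments differ, and real validations would live in tripleo-validations as Ansible:

```python
import subprocess

# Candidate commands matching the list above; these particular CLIs
# (pcs, ceph, mysqladmin, rabbitmqctl) are assumptions about the host.
CHECKS = [
    ("pacemaker", ["pcs", "status"]),
    ("ceph", ["ceph", "health"]),
    ("database", ["mysqladmin", "ping"]),
    ("rabbitmq", ["rabbitmqctl", "status"]),
]

def run_validations(checks):
    """Run each health check; any failure should block the upgrade."""
    results = {}
    for name, cmd in checks:
        try:
            proc = subprocess.run(cmd, capture_output=True)
            results[name] = proc.returncode == 0
        except FileNotFoundError:
            results[name] = False
    return results

if __name__ == "__main__":
    for name, passed in run_validations(CHECKS).items():
        print("%-10s %s" % (name, "OK" if passed else "FAILED"))
```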

> cheers, marios
>
> [0] https://github.com/openstack/tripleo-validations
> [1] https://review.openstack.org/#/c/462918/
> [2]
> https://github.com/openstack/tripleo-heat-templates/blob/stable/ocata/puppet/services/neutron-api.yaml#L197
>



-- 
Emilien Macchi



Re: [openstack-dev] [tripleo] [CI] HA and non-HA OVB jobs are now running with Quickstart

2017-05-12 Thread Emilien Macchi
On Wed, May 10, 2017 at 11:11 AM, Sagi Shnaidman  wrote:
> Hi, all
> In addition to multinode jobs, we migrated today part of OVB jobs to use
> quickstart.
>
> We had before OVB ha and OVB nonha jobs and together with migrating them to
> use quickstart we merged them into one job. It's called now:
>  - gate-tripleo-ci-centos-7-ovb-ha-oooq
>
> and will be voting job instead of
>  - gate-tripleo-ci-centos-7-ovb-ha
>  - gate-tripleo-ci-centos-7-ovb-nonha
>
> The updates job "gate-tripleo-ci-centos-7-ovb-updates" stays the same and
> nothing was changed about it. The same goes for the periodic jobs; they stay
> the same and an additional update will be sent when we migrate them too.
>
> In addition, for the tripleo-ci repository there are two branch jobs:
> - gate-tripleo-ci-centos-7-ovb-ha-oooq-newton
> - gate-tripleo-ci-centos-7-ovb-ha-oooq-ocata
> which replaces accordingly:
>  - gate-tripleo-ci-centos-7-ovb-ha-ocata
>  - gate-tripleo-ci-centos-7-ovb-nonha-ocata
>  - gate-tripleo-ci-centos-7-ovb-ha-newton
>  - gate-tripleo-ci-centos-7-ovb-nonha-newton
>
> A little about "gate-tripleo-ci-centos-7-ovb-ha-oooq" job:
> its features file is located in:
> https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/featureset001.yml
> and it's pretty similar to the previous HA job, but in addition it has overcloud
> SSL and node introspection enabled (which were tested in the previous non-HA
> job).
>
> The old HA and non-HA jobs have been moved into the experimental queue and can
> be run on a patch with "check experimental". This is done for regression
> checking; please use it if you suspect there is a problem with the migration.
>
> As usual you are welcome to ask any questions about new jobs and features in
> #tripleo . Tripleo-CI squad folks will be happy to answer you.

Except for https://bugs.launchpad.net/tripleo/+bug/1690109 (which doesn't
sound related to quickstart) I don't see any critical issues so far.
The runtime looks similar to the previous ovb-ha job from a quick look, so I
think we're also good here.

Really good work, thanks for everyone involved in this effort!

> Thanks
>
> -- Forwarded message --
> From: Attila Darazs 
> Date: Wed, Mar 15, 2017 at 12:04 PM
> Subject: [openstack-dev] [tripleo] Gating jobs are now running with
> Quickstart
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
>
>
> As discussed previously in the CI Squad meeting summaries[1] and on the
> TripleO weekly meeting, the multinode gate jobs are now running with
> tripleo-quickstart. To signify the change, we added the -oooq suffix to
> them.
>
> The following jobs migrated yesterday evening, with more to come:
>
> - gate-tripleo-ci-centos-7-undercloud-oooq
> - gate-tripleo-ci-centos-7-nonha-multinode-oooq
> - gate-tripleo-ci-centos-7-scenario001-multinode-oooq
> - gate-tripleo-ci-centos-7-scenario002-multinode-oooq
> - gate-tripleo-ci-centos-7-scenario003-multinode-oooq
> - gate-tripleo-ci-centos-7-scenario004-multinode-oooq
>
> For those who are already familiar with Quickstart, we introduced two new
> concepts:
>
> - featureset config files that are numbered collection of settings, without
> node configuration[2]
> - the '--nodes' option for quickstart.sh and the config/nodes files that
> deal with only the number and type of nodes the deployment will have[3]
>
> If you would like to debug these jobs, it might be useful to read
> Quickstart's documentation[4]. We hope the transition will be smooth, but if
> you have problems ping members of the TripleO CI Squad on #tripleo.
>
> Best regards,
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113724.html
> [2]
> https://docs.openstack.org/developer/tripleo-quickstart/feature-configuration.html
> [3]
> https://docs.openstack.org/developer/tripleo-quickstart/node-configuration.html
> [4] https://docs.openstack.org/developer/tripleo-quickstart/
>
>
>
> --
> Best regards
> Sagi Shnaidman
>



-- 
Emilien Macchi



Re: [openstack-dev] [tripleo][ci] Upgrade CI job for O->P (containerization)

2017-05-12 Thread Emilien Macchi
On Wed, May 10, 2017 at 9:26 AM, Jiří Stránský  wrote:
> Hi all,
>
> the upgrade job which tests Ocata -> Pike/master upgrade (from bare-metal to
> containers) just got a green flag from the CI [1].
>
> I've listed the remaining patches we need to land at the very top of the
> container CI etherpad [2], please let's get them reviewed and landed as soon
> as we can. The sooner we get the job going, the fewer upgrade regressions
> will get merged in the meantime (e.g. we have one from last week).
>
> The CI job utilizes a mixed-release deployment (master undercloud, overcloud
> deployed as Ocata and upgraded to latest). It tests the main overcloud
> upgrade phase (no separate compute role upgrades, no converge phase). This
> means the testing doesn't exhaustively cover the full expected "production
> scenario", but it covers the most important part, where we're likely to see
> the most churn and potential breakages. We'll see how much spare wall time
> we have to add more things once we get the job to run on patches regularly.

The work you and the team put in to make that happen is amazing and outstanding.
Once the jobs are considered stable, I would move them to the gate so
we don't break them. Wdyt?

>
> Thanks and have a good day!
>
> Jirka
>
> [1]
> http://logs.openstack.org/61/460061/15/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/d7faa50/
> [2] https://etherpad.openstack.org/p/tripleo-containers-ci
>



-- 
Emilien Macchi



[openstack-dev] [neutron] multiple vlan provider and tenant networks at the same time?

2017-05-12 Thread vladislav.belogrudov

Hi,

I wonder how it is possible to set up multiple network interfaces /
bridge mappings for tenant and provider VLAN networks at the same time.
E.g. in the case of 1 tenant VLAN network and 1 external VLAN, how should
the neutron bridge mapping work? The user interface does not allow
specifying the mapping.


Example: 2 external VLAN interfaces and 2 tenant ones. In this case the
neutron configuration would be:


[ml2_type_vlan]
network_vlan_ranges = 
provider0,provider1,tenant-vlan2:200:299,tenant-vlan3:300:399


[ovs]
bridge_mappings = 
provider0:br-ext0,provider1:br-ext1,tenant-vlan2:br-vlan2,tenant-vlan3:br-vlan3


How can neutron decide on the correct VLAN mapping for a tenant? Will
it try to pick provider0 if a normal user creates a tenant network?
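My rough understanding (a sketch of my own reading, not neutron's actual code) is that only the network_vlan_ranges entries carrying a min:max range feed the tenant allocation pool, while bare physnet names are usable only for explicit provider networks:

```python
# Illustrative sketch (NOT neutron's real implementation) of how the
# ML2 VLAN type driver's network_vlan_ranges value splits into
# provider-only physnets and tenant allocation pools.

def parse_vlan_ranges(config_value):
    """Return {physnet: [(min, max), ...]}; an empty list means the
    physnet is known but has no range to allocate tenant VLANs from."""
    ranges = {}
    for entry in config_value.split(","):
        parts = entry.strip().split(":")
        physnet = parts[0]
        ranges.setdefault(physnet, [])
        if len(parts) == 3:
            ranges[physnet].append((int(parts[1]), int(parts[2])))
    return ranges

cfg = ("provider0,provider1,"
       "tenant-vlan2:200:299,tenant-vlan3:300:399")
vlan_ranges = parse_vlan_ranges(cfg)

# Physnets eligible for automatic tenant allocation:
tenant_pools = {p: r for p, r in vlan_ranges.items() if r}
print(sorted(tenant_pools))      # ['tenant-vlan2', 'tenant-vlan3']

# provider0/provider1 are known physnets but have no allocatable range,
# so a plain tenant network would never land on them.
print(vlan_ranges["provider0"])  # []
```

If that reading is right, a normal user's tenant network can only come out of tenant-vlan2/tenant-vlan3, and provider0/provider1 are reachable only via admin provider-network attributes. Please correct me if this is wrong.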


Thanks,
Vladislav

PS. A similar question could not be answered on the [openstack] list.




Re: [openstack-dev] [magnum] magnum cluster-create for kubernetes-template was failed.

2017-05-12 Thread Mark Goddard
Hi,

I also hit the loopingcall error while running magnum 4.1.1 (ocata). It is
tracked by this bug: https://bugs.launchpad.net/magnum/+bug/1666790. I
cherry picked the fix to ocata locally, but this needs to be done upstream
as well.

I think that the heat stack create timeout is unrelated to that issue,
though. Try the following to debug it:
- Check the cluster's heat stack and its component resources.
- If they were created, SSH to the master and slave nodes and check that
systemd services are up and that cloud-init succeeded.
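A minimal sketch of the first step (the listing below is invented sample text; the real data would come from `openstack stack resource list -f value -n 5 <stack>`):

```python
# Hypothetical helper for triaging a failed Magnum cluster stack by
# filtering the failed resources out of a heat resource listing.
# The sample text is made up for illustration; in practice feed it:
#   openstack stack resource list -f value -n 5 <stack-id>

sample_output = """\
kube_masters 8c1f...e2 OS::Heat::ResourceGroup CREATE_COMPLETE 2017-05-12T04:00:01Z
kube_minions 91ab...7f OS::Heat::ResourceGroup CREATE_FAILED 2017-05-12T04:05:00Z
network 77cd...10 OS::Neutron::Net CREATE_COMPLETE 2017-05-12T03:58:44Z
"""

def failed_resources(listing):
    """Return names of resources whose status ends in FAILED."""
    failed = []
    for line in listing.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[3].endswith("FAILED"):
            failed.append(fields[0])
    return failed

print(failed_resources(sample_output))  # ['kube_minions']
```

Once you know which resource failed, `openstack stack resource show` on it usually carries the underlying error message.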

Regards,
Mark

On 12 May 2017 at 05:57, KiYoun Sung  wrote:

> Hello,
> Magnum Team.
>
> I installed magnum on OpenStack Ocata (by Fuel 11.0).
> I referred to this guide: https://docs.openstack.org/project-install-guide/container-infrastructure-management/ocata/install.html
>
> Below is my installation information.
> root@controller:~# dpkg -l | grep magnum
> magnum-api  4.1.0-0ubuntu1~cloud0
>  all  OpenStack containers as a service - API server
> magnum-common   4.1.0-0ubuntu1~cloud0
>  all  OpenStack containers as a service
> magnum-conductor4.1.0-0ubuntu1~cloud0
>  all  OpenStack containers as a service - conductor
> python-magnum   4.1.0-0ubuntu1~cloud0
>  all  OpenStack containers as a service - Python library
> python-magnumclient 2.5.0-0ubuntu1~cloud0
>  all  client library for Magnum API - Python 2.x
>
> After installation,
> I created a cluster template for kubernetes like this:
> (magnum cluster-template-create --name k8s-cluster-template \
>   --image fedora-atomic-latest \
>   --keypair testkey \
>   --external-network admin_floating_net \
>   --dns-nameserver 8.8.8.8 \
>   --flavor m1.small \
>   --docker-volume-size 5 \
>   --network-driver flannel \
>   --coe kubernetes)
>
> and I created a cluster,
> but the "magnum cluster-create" command failed:
> (magnum cluster-create --name k8s-cluster \
>   --cluster-template k8s-cluster-template \
>   --node-count 1 \
>   --timeout 10)
>
> After 10 minutes (the "--timeout 10" option),
> creation failed and the status was "CREATE_FAILED".
>
> I executed "openstack server list" command,
> there is a only kube-master instance.
> (root@controller:~# openstack server list
> +--------------------------------------+-------------------------------------------------------+--------+--------------------------------+----------------------+
> | ID                                   | Name                                                  | Status | Networks                       | Image Name           |
> +--------------------------------------+-------------------------------------------------------+--------+--------------------------------+----------------------+
> | bf9c5097-74fd-4457-a8a2-4feae76d4111 | k8-i27fw72w5t-0-i6lg6mzpzrl6-kube-master-ekjrg2v6ztss | ACTIVE | private=10.0.0.9, 172.16.1.135 | fedora-atomic-latest |
> +--------------------------------------+-------------------------------------------------------+--------+--------------------------------+----------------------+
> )
>
> I think the kube-master instance was created successfully.
> I can connect to that instance
> and the docker container was running normally.
>
> Why did this command fail?
>
> Here are my /var/log/magnum/magnum-conductor.log and /var/log/nova-all.log.
> magnum-conductor.log has an ERROR.
> ===
> 2017-05-12 04:05:00.684 756 ERROR magnum.common.keystone
> [req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Keystone API
> connection failed: no password, trust_id or token found.
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception
> [req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Exception in string
> format operation, kwargs: {'code': 500}
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception Traceback (most
> recent call last):
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception   File
> "/usr/lib/python2.7/dist-packages/magnum/common/exception.py", line 92,
> in __init__
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception self.message
> = self.message % kwargs
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception KeyError:
> u'client'
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall
> [req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Fixed interval
> looping call 'magnum.service.periodic.ClusterUpdateJob.update_status'
> failed
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall Traceback (most
> recent call last):
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File
> "/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 137,
> in _run_loop
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall result =
> func(*self.args, **self.kw)
> 2017-05-12 04:05:00.687 756 ERROR 
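
As an aside, the "Exception in string format operation ... KeyError: u'client'" earlier in that traceback is a secondary failure: the exception's message template references a key that isn't in the kwargs passed to it. A minimal reproduction of that pattern (hypothetical template and class name, not magnum's actual code):

```python
# Hypothetical reproduction of the secondary KeyError seen in the log:
# %-formatting a message template whose keys aren't all present in
# kwargs ({'code': 500} is supplied, but the template also needs
# 'client'). The template and class here are invented for illustration.
class FakeMagnumException(Exception):
    message = "Failed to connect with %(client)s (HTTP %(code)s)"

    def __init__(self, **kwargs):
        try:
            msg = self.message % kwargs
        except KeyError as e:
            # The "Exception in string format operation" path: fall
            # back to the raw template rather than crash while raising.
            msg = "%s (formatting failed, missing key %s)" % (self.message, e)
        super().__init__(msg)

err = FakeMagnumException(code=500)
print("missing key" in str(err))  # True
```

So the KeyError itself is only noise from error reporting; the root cause is the "Keystone API connection failed: no password, trust_id or token found" line above it, which the bug Mark linked addresses.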

[openstack-dev] [networking-sfc] Load distribution bug with OVS driver

2017-05-12 Thread Bernard Cafarelli
Hi,

This is a follow-up/summary of Launchpad bug 1675289 [0].

Per the original spec [1], with the OVS driver we should have groups
with the "hash" selection method, using source IP/port/protocol as
the hash parameters.

However, this requires OpenFlow 1.5, so we actually get the default
selection method and no parameter customization. Selection is done on
both source and destination [2], which in our case is close to the
"theoretical" parameters.

Additionally, we can set "lb_fields" via the API, but they are not
currently used in the driver. These also sound quite OVS-specific, and
I am not sure how other drivers handle them.

Anyway, I outlined the possible change to fix this properly in the bug
(if I understood everything correctly).

Feedback and comments welcome!

[0] https://bugs.launchpad.net/networking-sfc/+bug/1675289
[1] 
https://github.com/openstack/networking-sfc/blob/master/doc/source/ovs_driver_and_agent_workflow.rst#group-table-flows
[2] 
https://github.com/openvswitch/ovs/commit/1d1aae0b2f9a149bf74000d975fbf29d133cd9ca

-- 
Bernard Cafarelli
