Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-28 Thread Tomasz Napierala
Hi,

Just wondering what the final result and decision is? This change is pretty wide and
impacts many developers (and users); I think we should listen to the feedback
before making any decision.

Regards,


> On 17 Dec 2015, at 11:01, Artem Silenkov  wrote:
> 
> Hello! 
> We have merged 9.3 a week ago. From the packaging team's side a downgrade is not an
> option and was made by mistake.
> Regards 
> Artem Silenkov 
> ---
> MOS-Packaging
> 
> 
> On Thu, Dec 17, 2015, 12:32 Oleg Gelbukh  wrote:
> In fact, it seems that 9.2 has been in the mix since the introduction of CentOS 7.
> Thus, all tests made since then have run against 9.2, so upgrading it to 9.3
> is actually a change that has to be blocked by FF/SCF.
> 
> Just my 2c.
> 
> --
> Best regards,
> Oleg Gelbukh
> 
> On Thu, Dec 17, 2015 at 12:13 PM, Evgeniy L  wrote:
> Hi Andrew,
> 
> It doesn't look fair at all to say that we use Postgres-specific features for
> no reason or, as you said, "just because we want".
> For example, we used Arrays, which fit our roles usage pretty well and
> improved
> readability and performance.
> Or try to fit something like [1] into a relational schema; I don't think
> we would get
> a good result.
> 
> P.S. Sending a link to a holy-war topic (schema vs. schemaless) won't help to
> solve our
> specific problem of downgrading Postgres vs. keeping the old (new) version.
> 
> [1] 
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
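The trade-off described above (roles kept together as one document vs. normalized relational rows) can be sketched with standard-library tools. In the sketch below, sqlite3 stands in for Postgres, and the table and column names are hypothetical, not Fuel's actual schema:

```python
import json
import sqlite3

# Document style vs. normalized style for a node's roles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, roles TEXT)")

# Document style: one row per node, roles stored together as JSON text
# (a Postgres ARRAY or JSON column plays this role in the real discussion).
conn.execute("INSERT INTO nodes VALUES (1, ?)",
             (json.dumps(["controller", "mongo"]),))

# Normalized style: one row per (node, role) pair in a join table.
conn.execute("CREATE TABLE node_roles (node_id INTEGER, role TEXT)")
conn.executemany("INSERT INTO node_roles VALUES (?, ?)",
                 [(1, "controller"), (1, "mongo")])

def roles_json(node_id):
    # One lookup, roles come back as a ready-to-use list.
    row = conn.execute("SELECT roles FROM nodes WHERE id = ?",
                       (node_id,)).fetchone()
    return json.loads(row[0])

def roles_normalized(node_id):
    # Same answer, assembled from multiple rows.
    rows = conn.execute("SELECT role FROM node_roles WHERE node_id = ?",
                        (node_id,))
    return [r[0] for r in rows]

assert roles_json(1) == roles_normalized(1) == ["controller", "mongo"]
```

Both shapes express the same data; the argument in the thread is about which one reads and performs better for this workload.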
> 
> 
> On Tue, Dec 15, 2015 at 10:53 PM, Andrew Maksimov  
> wrote:
> +1 to Igor's suggestion to downgrade Postgres to 9.2. Our users don't work
> directly with Postgres, so there is no deprecation of any Fuel feature.
> Maintaining our own custom Postgres package just because we want a "JSON
> column" is not a rational decision. Come on, Fuel is not a billing system
> with thousands of tables and special database requirements. At the least, we
> should try to keep it simple and avoid unnecessary complication.
> 
> PS
>  BTW, some people suggest avoiding JSON columns; read [1], "PostgreSQL
> anti-patterns: unnecessary json columns".
> 
> [1] - 
> http://blog.2ndquadrant.com/postgresql-anti-patterns-unnecessary-jsonhstore-dynamic-columns/
> 
> Regards,
> Andrey Maximov
> Fuel Project Manager
> 
> 
> On Tue, Dec 15, 2015 at 9:34 PM, Vladimir Kuklin  wrote:
> Folks
> 
> Let me add my 2c here.
> 
> I am for using Postgres 9.3. Here is an additional argument to the ones 
> provided by Artem, Aleksandra and others.
> 
> Fuel is sometimes highly customized by our users for their specific
> needs. It has been on Postgres 9.3 for a while, and they may well have
> gotten used to it and assumed by default that this would not change. Some
> of the features they are developing for their own sake may therefore
> depend on Postgres 9.3, and we will never be able to tell the fraction of such
> use cases. Moreover, downgrading the DBMS version in Fuel must inevitably be
> considered a 'deprecation' of some features our software suite is
> providing to our users. This actually means that we MUST provide our users
> with a warning and a deprecation period to allow them to adjust to these
> changes. Obviously, an accidental change of the Postgres version does not
> follow such a policy in any way. So I see no other way except going back to
> Postgres 9.3.
> 
> 
> On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky  
> wrote:
> Hey Mike,
> 
> Thanks for your input.
> 
> > actually not.  if you replace your ARRAY columns with JSON entirely,
> 
> We still need to fix the code, i.e. replace ARRAY-specific queries
> with JSON ones throughout the code. ;)
> 
> > there's already a mostly finished PR for SQLAlchemy support in the queue.
> 
> Does it mean SQLAlchemy will have one unified interface to make JSON
> queries? So we can use different backends if necessary?
> 
> Thanks,
> - Igor
> 
> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
> >
> >
> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
> >> Hey Julien,
> >>
> >>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
> >>
> >> I believe this blueprint is about DB for OpenStack cloud (we use
> >> Galera now), while here we're talking about DB backend for Fuel
> >> itself. Fuel has a separate node (so called Fuel Master) and we use
> >> PostgreSQL now.
> >>
> >>> does that mean Fuel is only going to be able to run with PostgreSQL?
> >>
> >> Unfortunately, we are already tied to PostgreSQL. For instance, we use
> >> PostgreSQL's ARRAY column type. Introducing a JSON column is one more
> >> way to tighten the knots.
> >
> > actually not.  if you replace your ARRAY columns with JSON entirely,
> > MySQL has JSON as well now:
> > https://dev.mysql.com/doc/refman/5.7/en/json.html
> >
> > there's already a mostly finished PR 
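On Igor's question about a unified interface: SQLAlchemy 1.1 later shipped a backend-agnostic `sqlalchemy.types.JSON` type that works across Postgres, MySQL and SQLite. The idea behind such an interface, one call site with backend-specific SQL generated per dialect, can be illustrated with a small hypothetical helper (this is not SQLAlchemy's API, just a sketch of the concept):

```python
# Hypothetical sketch of a "unified JSON query interface": the caller asks
# for one logical operation, and the dialect-specific SQL is rendered per
# backend. SQLAlchemy's generic JSON type does this job for real.

def json_field_expr(dialect, column, key):
    """Render "extract the text value of <key> from JSON <column>"."""
    if dialect == "postgresql":
        # Native Postgres JSON operator, returns text.
        return "%s ->> '%s'" % (column, key)
    if dialect == "mysql":
        # MySQL 5.7+ JSON functions; JSON_UNQUOTE strips the quoting.
        return "JSON_UNQUOTE(JSON_EXTRACT(%s, '$.%s'))" % (column, key)
    raise NotImplementedError(dialect)

assert json_field_expr("postgresql", "meta", "status") == "meta ->> 'status'"
assert json_field_expr("mysql", "meta", "status") == \
    "JSON_UNQUOTE(JSON_EXTRACT(meta, '$.status'))"
```

With a layer like this (or SQLAlchemy's generic type), switching DB backends would not require rewriting every JSON query in the application.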

Re: [openstack-dev] [Fuel][Bareon] Fuel & Bareon integration (fuel modularisation)

2015-12-28 Thread Tomasz Napierala
I agree with Evgeny: from a work-organization point of view it would be more optimal
to have 2 repos. API-facing and system-facing programming are completely different
domains, requiring different skill sets. In my opinion the separation would lower
the entry barriers.

Regards,

> On 17 Dec 2015, at 15:53, Evgeniy L  wrote:
> 
> Hi Igor,
> 
> Bareon by itself doesn't have any REST interface; Bareon is basically
> fuel_agent, which is a framework plus a CLI wrapper for using it as an agent.
> In order to store and edit the required entities in the database we need some
> wrapper which adds this functionality. This simple wrapper will be
> implemented in Bareon-API.
> A user should be able to use Bareon without any additional API/database if
> she/he wants to do some basic stuff without needing to store the
> configuration, which is not the Fuel use case.
> If the question was specifically about Bareon-API living in a separate repo:
> there is no reason to keep it in a single repo, since we may have separate
> teams working on those sub-projects and they solve somewhat different
> problems, a user-facing API vs. low-level tools.
> 
> Thanks,
> 
> On Thu, Dec 17, 2015 at 5:33 PM, Igor Kalnitsky  
> wrote:
> > create Bareon-API repository, and start production ready implementation
> 
> For what reason do we need a separate repo? I thought API will be a
> part of bareon repo. Or bareon is just a provisioning agent, which
> will be driven by bareon-api?
> 
> On Thu, Dec 17, 2015 at 12:29 PM, Evgeniy L  wrote:
> > Hi,
> >
> > Some time ago, we’ve started a discussion [0] about the Fuel modularisation
> > activity.
> > Due to unexpected circumstances the POC has been delayed.
> >
> > Regarding the partitioning/provisioning system, we have a POC with a demo
> > [1] (thanks to Sylwester), which shows how the integration of Fuel and
> > Bareon [2] can be done.
> >
> > To summarise the implementation:
> > * we have a simple implementation of Bareon-API [3], which stores
> >   partitioning-related data and allows editing it
> > * for Nailgun, a new extension has been implemented [4], which uses
> >   Bareon-API to store partitioning information, so we will be able to
> >   easily switch between the classic volume_manager implementation and
> >   the Bareon-API extension
> > * when provisioning starts, the extension retrieves the data from
> >   Bareon-API
> >
> > Next steps:
> > * create Bareon-API repository, and start production ready implementation
> > * create a spec for Fuel project
> > * create a spec for Bareon project
> >
> > If you have any questions don’t hesitate to ask them in this thread, also
> > you can
> > find us on #openstack-bareon channel.
> >
> > Thanks!
> >
> > [0]
> > http://lists.openstack.org/pipermail/openstack-dev/2015-October/077025.html
> > [1] https://www.youtube.com/watch?v=GTJM8i7DL0w
> > [2]
> > http://lists.openstack.org/pipermail/openstack-dev/2015-December/082397.html
> > [3] https://github.com/Mirantis/bareon-api
> > [4] https://review.openstack.org/#/c/250864/
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 

-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland









Re: [openstack-dev] [neutron] Query about re-directing incoming traffic.

2015-12-28 Thread Chandra Mohan Babu Nadiminti
Have a look at the extra-route extension.


http://developer.openstack.org/api-ref-networking-v2-ext.html#extraroute-ext
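The extraroute extension updates a router's `routes` attribute with a `PUT /v2.0/routers/{router_id}` request, each route being a destination CIDR plus a nexthop (e.g. the IP of the port where the network function sits). A minimal sketch of the request body; the addresses are illustrative:

```python
import json

def build_extra_routes_body(routes):
    # Body shape used by the Networking v2 extraroute extension:
    # {"router": {"routes": [{"destination": ..., "nexthop": ...}]}}
    return json.dumps({"router": {"routes": routes}})

# Steer traffic destined to 10.0.3.0/24 through a nexthop where a
# network function (firewall, IDS, ...) could be attached.
body = build_extra_routes_body(
    [{"destination": "10.0.3.0/24", "nexthop": "10.0.0.13"}])

parsed = json.loads(body)
assert parsed["router"]["routes"][0]["nexthop"] == "10.0.0.13"
```

This only installs a static route on the router; as noted later in the thread, it does not by itself return the traffic to its normal forwarding path after the network function processes it.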


On Mon, Dec 28, 2015 at 9:10 AM, Vikram Choudhary  wrote:

>
>
> On Mon, Dec 28, 2015 at 10:20 PM, Jay Pipes  wrote:
>
>> On 12/28/2015 11:13 AM, Vikram Choudhary wrote:
>>
>>> Hi All,
>>>
>>> We want to redirect all / some specific incoming traffic to a particular
>>> neutron port, where a network function is deployed. [Network function
>>> could be DPI, IDS, Firewall, Classifier, etc]. In this regard, we have
>>> few queries:
>>>
>>> 1. How we can achieve this?
>>>
>>> 2. Do we have well-defined NBI's for such use-case?
>>>
>>
>> What are NBIs?
>>
> Vikram: Does neutron already support any API's for achieving this?
>
>
>> -jay
>>
>>
>
>
>
>


-- 
Regards
Chandra Mohan Babu Nadiminti


[openstack-dev] [TripleO] no meeting tomorrow

2015-12-28 Thread Dan Prince
The next TripleO IRC meeting will be on January 5th 2016.

Thanks,

Dan



[openstack-dev] [Neutron] [Dragonflow] Atomic update doesn't work with etcd-driver

2015-12-28 Thread Li Ma
Hi Gal, you reverted this patch [1] due to the broken pipeline. Could
you provide some debug information or a more detailed description? When I
run my devstack, I cannot reproduce the sync problem.

[1] 
https://github.com/openstack/dragonflow/commit/f83dd5795d54e1a70b8bdec1e6dd9f7815eb6546
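For context, the atomic update in question is a compare-and-swap: read the current value, then write only if the key has not been modified since, retrying on conflict. The sketch below uses an in-memory stand-in for etcd (whose v2 API expresses the condition via prevIndex/prevValue parameters) and is not the actual Dragonflow driver code:

```python
class CasConflict(Exception):
    pass

class FakeStore:
    """In-memory stand-in for etcd: key -> (value, modification index)."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key, (None, 0))

    def set(self, key, value, prev_index):
        # Reject the write if someone modified the key in between.
        _, cur_index = self.get(key)
        if cur_index != prev_index:
            raise CasConflict(key)
        self._data[key] = (value, cur_index + 1)

def atomic_update(store, key, mutate, retries=10):
    """Apply mutate() to the stored value atomically, retrying on races."""
    for _ in range(retries):
        value, index = store.get(key)
        try:
            store.set(key, mutate(value), prev_index=index)
            return
        except CasConflict:
            continue  # lost the race: re-read and try again
    raise RuntimeError("too much contention on %s" % key)

store = FakeStore()
atomic_update(store, "lport", lambda v: (v or 0) + 1)
atomic_update(store, "lport", lambda v: (v or 0) + 1)
assert store.get("lport")[0] == 2
```

If a driver skips the conditional write (or the backend silently ignores it), concurrent updaters overwrite each other, which is the kind of sync breakage being debugged here.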

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com



Re: [openstack-dev] [neutron] Query about re-directing incoming traffic.

2015-12-28 Thread Vikram Choudhary
On Tue, Dec 29, 2015 at 2:34 AM, Chandra Mohan Babu Nadiminti <
nadiminti.chan...@gmail.com> wrote:

> Have a look at extra-route extension.
>
>
> http://developer.openstack.org/api-ref-networking-v2-ext.html#extraroute-ext
>
Vikram: IIUC, extra-route can only re-direct traffic to a specific
destination by installing an additional route in the routing table. What we
want is for the traffic to also follow its normal forwarding path after it
gets processed by the NF (network function).


> On Mon, Dec 28, 2015 at 9:10 AM, Vikram Choudhary 
> wrote:
>
>>
>>
>> On Mon, Dec 28, 2015 at 10:20 PM, Jay Pipes  wrote:
>>
>>> On 12/28/2015 11:13 AM, Vikram Choudhary wrote:
>>>
 Hi All,

 We want to redirect all / some specific incoming traffic to a particular
 neutron port, where a network function is deployed. [Network function
 could be DPI, IDS, Firewall, Classifier, etc]. In this regard, we have
 few queries:

 1. How we can achieve this?

 2. Do we have well-defined NBI's for such use-case?

>>>
>>> What are NBIs?
>>>
>> Vikram: Does neutron already support any API's for achieving this?
>>
>>
>>> -jay
>>>
>>>
>>>
>>
>>
>>
>>
>
>
> --
> Regards
> Chandra Mohan Babu Nadiminti
>
>
>
>


Re: [openstack-dev] [neutron][fwaas]some architectural advice on fwaas driver writing

2015-12-28 Thread Oguz Yarimtepe
After seeing that vYatta requires a driver plugged in to the interface,
I gave up debugging it.


Now I am trying the vArmour driver. It looks simpler; many things are clear
except that they have their own L3 agent. It seems it should be issuing API
calls when a new router is added, removed or updated. I tried with a Liberty
devstack environment but couldn't manage to get the debugger to reach line
https://github.com/openstack/neutron-fwaas/blob/stable/liberty/neutron_fwaas/services/firewall/agents/varmour/varmour_router.py#L294


I tried adding a router and removing it. Each time the code execution
reaches the line
https://github.com/openstack/neutron-fwaas/blob/stable/liberty/neutron_fwaas/services/firewall/agents/varmour/varmour_router.py#L278

the global agent code is executed, and I couldn't find out when the SNAT or
floating-IP functions are called.


Any idea?

I am also looking for the vArmour firewall software to test, but it seems
it is not available even as a trial version: I applied on their site for a
demo version but haven't received any response yet.


On 11/23/2015 08:25 AM, Germy Lure wrote:

Hi,
Under the current FWaaS architecture or framework, integrating a hardware
firewall is not easy. It requires Neutron to support multiple vendors at the
service level. In other words, vendors must fit together at the service
level, while currently each vendor just provides all services through its
own controller.


I think the root cause is that Neutron just doesn't know how the network
devices connect to each other. Neutron provides FW, LB, VPN and other
advanced network functionalities as services. But as the implementation
layer, Neutron needs topology info to make the right decision and route
traffic to the right device. For example, from a namespace router to a
hardware firewall, Neutron should add some internal routes and even extra L3
interfaces according to the connection relationship between them. If
the firewall service is integrated with the router, like Vyatta, it's
simple: the only thing you need to do is enable the firewall itself.





[openstack-dev] Query about re-directing incoming traffic

2015-12-28 Thread Vikram Choudhary
Hi All,

We want to redirect all / some specific incoming traffic to a
particular port where a network function is deployed. [The network function
could be a DPI, IDS, Firewall, Classifier, etc.] In this regard, we have a few
queries:

1. How can this be achieved?
2. Do we have well-defined NBI's for such use-case?

Any thought / suggestion will be appreciated.

Thanks
Vikram


Re: [openstack-dev] [cinder] [nova] whether the ServiceGroup in Cinder is necessary

2015-12-28 Thread Michał Dulko
On 12/28/2015 05:03 AM, hao wang wrote:
> Hi, Janice,
>
> This idea seems useful to me for detecting the state of the
> cinder-volume process more quickly, but I feel there is another issue:
> if the back-end device fails, you still
> can't keep the cloud in HA or create volumes successfully, since the
> service is up but the device is down.
>
> So what I want to say is that we may need to consider detecting and
> reporting the device state first [1], and then consider improving the
> service if we need that.
>
> [1] https://review.openstack.org/#/c/252921/

We're already doing something similar in terms of driver initialization
state [1]. c-vols with uninitialized drivers will show up as "down".
Your idea also seems to make sense to me.

[1] https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py#L474-L481
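Schematically, the check referenced above combines the service heartbeat with the driver's initialization state: a c-vol whose driver failed to initialize reports "down" even though the process is alive. The names below are illustrative, not Cinder's actual code:

```python
HEARTBEAT_TIMEOUT = 60  # seconds; illustrative threshold

def service_is_up(last_heartbeat_age, driver_initialized):
    # A service counts as "up" only if its heartbeat is fresh AND its
    # volume driver finished initialization successfully.
    return last_heartbeat_age < HEARTBEAT_TIMEOUT and driver_initialized

assert service_is_up(5, True)
assert not service_is_up(5, False)   # process alive, backend unusable
assert not service_is_up(120, True)  # stale heartbeat
```

The thread's suggestion extends the same pattern one level deeper: also factor in whether the back-end device itself is healthy, not just the driver.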



[openstack-dev] [openstack-ansible][designate] Regarding Designate install through Openstack-Ansible

2015-12-28 Thread Sharma Swati6

Hi All,

Thanks a lot for your valuable feedback, Jesse.

Point 1 :

I have made the appropriate Designate entry in the file here : 
/playbooks/defaults/repo_packages/openstack_services.yml and uploaded it for 
review here : 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/defaults/repo_packages/openstack_services.yml
Here, I have set 'designate_git_install_branch:' to the most recent one as of
'17.12.2015'.

Point 2 :

The execution of tasks and handlers is very well explained in your answer. 
Thanks for that :)

Point 3 :

With regards to creating a DB user & DB, I have modeled the file from glance 
and placed it here: 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/roles/os_designate/tasks/designate_db_setup.yml

Point 4 :

I also mentioned that I am facing an error while running the playbook, for which I
have pasted the results at http://paste.openstack.org/show/482171/ . On IRC,
Jesse recommended attaching the designate container first and
checking the internet connection.
I did attach the new designate_container and pinged some addresses to check
connectivity. That works fine, but I still get the same error while
running the playbook.
Any other probable cause?

Once this is done, I will check the next steps suggested by Jesse, i.e. to
see the designate service in the Keystone service catalog and interact with it
via the CLI.

Please share your suggestions.

Thanks & Regards
Swati Sharma
System Engineer
Tata Consultancy Services
Mailto: sharma.swa...@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Consulting



-Jesse Pretorius  wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" 

From: Jesse Pretorius 
Date: 12/17/2015 04:35PM
Cc: pandey.pree...@tcs.com, Partha Datta 
Subject: Re: [openstack-dev] Regarding Designate install through
Openstack-Ansible

Hi Swati,

It looks like you're doing well so far! In addition to my review feedback via 
IRC, let me try to answer your questions.

The directory containing the files which hold the SHA's is here:
https://github.com/openstack/openstack-ansible/tree/master/playbooks/defaults/repo_packages

Considering that Designate is an OpenStack Service, the appropriate entries 
should be added into this file:
https://github.com/openstack/openstack-ansible/blob/master/playbooks/defaults/repo_packages/openstack_services.yml

The order of the services is generally alphabetic, so Designate should be added 
after Cinder and before Glance.

I'm not sure I understand your second question, but let me try and respond with 
what I think you're asking. Assuming a running system with all the other 
components, and an available container for Designate, the workflow will be:

1 - you execute the os-designate-install.yml playbook.
2 - Ansible executes the pre-tasks, then the role at 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/os-designate-install.yml#L64
3 - Ansible then executes 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/roles/os_designate/tasks/main.yml
4 - Handlers are triggered when you notify them, for example: 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/roles/os_designate/tasks/designate_post_install.yml#L54

Does that help you understand how the tasks and handlers are included for 
execution? Does that answer your question?

With regards to creating a DB user & DB: as you've modeled the role on Aodh,
which doesn't use Galera, you're missing that part. An example you can model
from is here:
https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_glance/tasks/glance_db_setup.yml
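For reference, a designate_db_setup.yml modeled loosely on that glance task might look like the sketch below; the `designate_galera_*` variable names are guesses for illustration, not the final ones:

```yaml
# Sketch of a designate DB-setup task, modeled on the os_glance role.
# Variable names (designate_galera_*) are illustrative placeholders.
- name: Create designate database
  mysql_db:
    login_host: "{{ designate_galera_address }}"
    name: "{{ designate_galera_database }}"
    state: present

- name: Grant access to the designate database
  mysql_user:
    login_host: "{{ designate_galera_address }}"
    name: "{{ designate_galera_user }}"
    password: "{{ designate_container_mysql_password }}"
    priv: "{{ designate_galera_database }}.*:ALL"
    host: "{{ item }}"
    state: present
  with_items:
    - "localhost"
    - "%"
```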

Question 4 is a complex one, and I don't know enough about Designate to answer 
properly. From what I can see you're already doing the following in the role:

1 - preparing the host/container, RabbitMQ (and soon will be doing the DB) for 
a Designate deployment
2 - installing the apt and python packages required for Designate to be able to 
run
3 - placing down the config files and upstart scripts for Designate to run
4 - registering the Designate service endpoint

Once that's done, I'm not entirely sure what else needs to be done to make 
Designate do what it needs to do. At that point, are you able to see the 
service in the Keystone service catalog? Can you interact with it via the CLI?

A few housekeeping items relating to the use of email and the mailing list:

If you wish to gain the attention of particular communities on the 
openstack-dev mailing list, the best is to tag the subject line. In this 
particular case as you're targeting the OpenStack-Ansible community with 
questions you should add '[openstack-ansible]' as a tag in your subject line. 
If you 

Re: [openstack-dev] [glance][drivers] Spec freeze approaching: Review priorities

2015-12-28 Thread Flavio Percoco

On 17/12/15 19:34 +, Flavio Percoco wrote:

On 09/12/15 18:52 -0430, Flavio Percoco wrote:

Greetings,

To all Glance drivers and people interested in following up on Glance
specs. I've added to our meeting agenda etherpad[0] the list of review
priorities for specs.

Please, bear in mind that our spec freeze is approaching and we need
to provide as much feedback as possible on the proposed specs so that
spec writers will have enough time to address our comments.

As a reminder, the spec freeze for Glance will start on Mon 28th and
it'll end on Jan 1st.

Thanks everyone for your efforts,
Flavio

[0] https://etherpad.openstack.org/p/glance-drivers-meeting-agenda




Just another heads up that the above deadline is getting closer and
closer!

To all drivers, please help reviewing as many specs as possible. To
spec owners, keep an eye on your specs and address comments in a
timely manner so we can have them ready and merged in time.



Gentle reminder that we're in Glance's spec freeze week. Please, go
ahead and update your spec and reach out for final reviews. Some specs
will be merged this week and others will be -2'd. The specs that won't
be merged this week can be proposed for spec freeze exception starting
next week.

If your spec will require a spec freeze exception, please, use the
following tags in the email subject: `[glance]`, `[SFE]`


Since many folks are out on holidays this week, I'd recommend using
the mailing list as a way to communicate with members of the glance
drivers team (or spec reviewers in general).

Thank y'all and happy new year,
Flavio



Cheers,
Flavio

--
@flaper87
Flavio Percoco



--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Neutron] [Dragonflow] Support configuration of DB clusters

2015-12-28 Thread Gal Sagie
Hi Li Ma,

I think it's a good idea.
I suggest, for the first stage, passing the CONF as an optional parameter in
addition to db_ip and db_port.
This way you will have minimal code changes in the first patch.

If we see it's working OK, we can later remove db_ip and db_port and adjust
the other
drivers in another patch.

Thanks
Gal.


On Mon, Dec 28, 2015 at 9:48 AM, Li Ma  wrote:

> My intention is to pass a db-host-list (maybe defined in the conf
> file) to the db backend drivers. I find that there's '**args' available
> [1], but it seems not to work due to [2].
>
> I suggest using a simpler method to allow user-defined configuration,
> that is, removing the db_ip and db_port parameters and directly passing
> the cfg.CONF object to the db backend driver.
>
> In db_api.py, it would be:
> def initialize(self, config):
>     self.config = config
>
> In api_nb.py, it would be:
> def initialize(self):
>     self.driver.initialize(cfg.CONF) <-- from oslo_config
>
> As a result, db backend developers can choose which parameters to use.
>
> [1]
> https://github.com/openstack/dragonflow/blob/master/dragonflow/db/db_api.py#L21
> [2]
> https://github.com/openstack/dragonflow/blob/master/dragonflow/db/api_nb.py#L74-L75
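A minimal sketch of this proposal: hand the whole config object to the driver and let it pick the options it needs, e.g. a host list for clustered backends with a fallback to the single db_ip/db_port pair. `FakeConf` stands in for oslo.config's cfg.CONF, and the `remote_db_hosts` option name is illustrative:

```python
class FakeConf:
    # Stand-in for cfg.CONF; option names are illustrative.
    db_ip = "192.0.2.10"
    db_port = 4001
    remote_db_hosts = ["192.0.2.10:4001", "192.0.2.11:4001"]

class ClusterAwareDriver:
    def initialize(self, config):
        # Prefer the cluster host list when the deployer provided one;
        # otherwise fall back to the legacy single-endpoint options.
        hosts = getattr(config, "remote_db_hosts", None)
        if hosts:
            self.endpoints = [tuple(h.rsplit(":", 1)) for h in hosts]
        else:
            self.endpoints = [(config.db_ip, str(config.db_port))]

driver = ClusterAwareDriver()
driver.initialize(FakeConf())
assert driver.endpoints == [("192.0.2.10", "4001"), ("192.0.2.11", "4001")]
```

Drivers for non-clustered backends simply ignore the extra option, which is the "let developers choose which parameter to use" part of the proposal.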
>
> On Mon, Dec 28, 2015 at 9:12 AM, shihanzhang 
> wrote:
> >
> > good suggestion!
> >
> >
> > At 2015-12-25 19:07:10, "Li Ma"  wrote:
> >>Hi all, currently we only support db_ip and db_port in the
> >>configuration file. Some DB SDKs support clustering, like Zookeeper:
> >>you can specify a list of nodes when the client application starts to
> >>connect to the servers.
> >>
> >>I'd like to implement this feature, specifying ['ip1:port',
> >>'ip2:port', 'ip3:port'] list in the configuration file. If only one
> >>server exists, just set it to ['ip1:port'].
> >>
> >>Any suggestions?
> >>
> >>--
> >>
> >>Li Ma (Nick)
> >>Email: skywalker.n...@gmail.com
> >>
>
> >
> >
> >
> >
> >
> >
> >
> >
>
>
>
> --
>
> Li Ma (Nick)
> Email: skywalker.n...@gmail.com
>
>



-- 
Best Regards ,

The G.


Re: [openstack-dev] [Fuel] Wipe of the nodes' disks

2015-12-28 Thread Aleksandr Didenko
Hi,

> I want to propose not to wipe disks and simply unset bootable flag from
node disks.

AFAIK, removing the bootable flag does not guarantee that the system won't
be booted from the local drive. This is why erase_node is needed.

Regards,
Alex

On Fri, Dec 25, 2015 at 8:59 AM, Artur Svechnikov 
wrote:

> > When do we use the ssh_erase_nodes?
>
> It's used in stop_deployment provision stage [0] and for control reboot
> [1].
>
> > Is it a fall back mechanism if the mcollective fails?
>
> Yes, it's like a fallback mechanism, but it's always used [2].
>
> > That might have been a side effect of cobbler and we should test if it's
> > still an issue for IBP.
>
> As I can see from the code, the partition table is always wiped before
> provisioning [3].
>
> [0]
> https://github.com/openstack/fuel-astute/blob/master/lib/astute/provision.rb#L387-L396
> [1]
> https://github.com/openstack/fuel-astute/blob/master/lib/astute/provision.rb#L417-L425
> [2]
> https://github.com/openstack/fuel-astute/blob/master/lib/astute/provision.rb#L202-L208
> [3]
> https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L194-L197
>
> Best regards,
> Svechnikov Artur
>
> On Thu, Dec 24, 2015 at 5:27 PM, Alex Schultz 
> wrote:
>
>> On Thu, Dec 24, 2015 at 1:29 AM, Artur Svechnikov
>>  wrote:
>> > Hi,
>> > We have faced an issue where nodes' disks are wiped after stopping
>> > deployment.
>> > It occurs due to the node-removal logic (this is old logic and, as I
>> > understand, no longer relevant). This logic contains a step which calls
>> > erase_node [0]; there is also another method that wipes disks [1]. AFAIK
>> > it was needed for smooth cobbler provisioning and to ensure that nodes
>> > would not be booted from disk when they shouldn't. Instead of cobbler we
>> > use IBP from fuel-agent, where the current partition table is wiped
>> > before the provisioning stage. Using disk wiping as insurance that nodes
>> > will not be booted from disk doesn't seem like a good solution. I want
>> > to propose not wiping disks and simply unsetting the bootable flag on
>> > node disks.
>> >
>> > Please share your thoughts. Perhaps some other components use the fact
>> that
>> > disks are wiped after node removing or stop deployment. If it's so, then
>> > please tell about it.
>> >
>> > [0]
>> >
>> https://github.com/openstack/fuel-astute/blob/master/lib/astute/nodes_remover.rb#L132-L137
>> > [1]
>> >
>> https://github.com/openstack/fuel-astute/blob/master/lib/astute/ssh_actions/ssh_erase_nodes.rb
>> >
>>
>> I thought the erase_node[0] mcollective action was the process that
>> cleared a node's disks after their removal from an environment. When
>> do we use the ssh_erase_nodes?  Is it a fall back mechanism if the
>> mcollective fails?  My understanding on the history is based around
>> needing to have the partitions and data wiped so that the LVM groups
>> and other partition information does not interfere with the
>> installation process the next time the node is provisioned.  That
>> might have been a side effect of cobbler and we should test if it's
>> still an issue for IBP.
>>
>>
>> Thanks,
>> -Alex
>>
>> [0]
>> https://github.com/openstack/fuel-astute/blob/master/mcagents/erase_node.rb
>>
>> > Best regards,
>> > Svechnikov Artur
>> >
>> >
>> >
>>
>>
>
>
>
>


Re: [openstack-dev] [senlin] Midcycle meetup (2016-01-11/12)

2015-12-28 Thread Yanyan Hu
Great! Can't wait to see you guys :)

2015-12-28 10:03 GMT+08:00 Qiming Teng :

> Dear all,
>
> Wish you all a Merry Christmas and a Happy New Year.
>
> Senlin team is planning a mid-cycle meetup next month in Beijing. Well,
> it goes beyond just a meetup between developers. We are inviting some
> users to share their real-life use cases and requirements.
>
> IBM Research China Lab will host the event. Please find the schedule
> etherpad here: https://etherpad.openstack.org/p/senlin-mitaka-midcycle
>
> Any comments/suggestions are welcome. We are looking forward to seeing you
> guys.
>
> Regards,
>   Qiming
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,

Yanyan


Re: [openstack-dev] [puppet] weekly meeting #64

2015-12-28 Thread Emilien Macchi
Hello,

I added some items to our agenda:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151229
Feel free to add more topics, reviews, bugs, etc.

Some people are around this week, so we will hold our weekly meeting
tomorrow at 1500 UTC.

See you there,

On 12/22/2015 09:42 AM, Emilien Macchi wrote:
> 
> 
> On 12/21/2015 03:54 PM, Emilien Macchi wrote:
>> For Christmas survivors, we can handle a weekly meeting tomorrow:
>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151222
> 
> The agenda looks empty (except 2 reviews that we can take care of async on
> IRC). So we are postponing meeting #64 to next week.
> 
> Until this time, please update the etherpad if you have outstanding topics.
> 
>> If you have topics or reviews, go ahead in the etherpad.
>> Otherwise, we can chat on IRC and make triage during this week.
>>
>> I take the opportunity to wish Merry Christmas, Happy New Year 2016 to
>> all OpenStack contributors, I wish you good time with your family and
>> friends,
>> Take care,
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi





Re: [openstack-dev] [TripleO] Removing the Tuskar repos

2015-12-28 Thread Emilien Macchi


On 12/22/2015 12:38 PM, Dougal Matthews wrote:
> Hi all,
> 
> I mentioned this at the meeting last week, but wanted to get wider
> input. As far as I can tell from the current activity, there is no work
> going into Tuskar and it isn't being tested in CI. This means the code
> is quickly becoming stale and likely won't work soon (if not already).
> 
> TripleO common is working towards solving the same problems that Tuskar
> attempted and can be seen as the replacement for Tuskar. [1][2]
> 
> Are there any objections to its removal? This would include the tuskar,
> python-tuskarclient and tuskar-ui repos. We would also need to remove it
> from instack-undercloud and tripleo-image-elements.
> 
> I'll begin the cleanup process sometime in mid/late January
> if there are no objections.
> 

By then, I'll proceed with the puppet-tuskar deprecation so we stay synced
with upstream.

Please let me know of any objections.
-- 
Emilien Macchi





Re: [openstack-dev] [Fuel] Liberty naming in Fuel 8.0

2015-12-28 Thread Igor Kalnitsky
Hi Sergii,

You've raised an old thread started by Oleg G. once again [1]. Last
time we didn't reach any agreements, but I'm sure that it would be
better to change the version to "liberty-8.0" instead of "2015.2.0-8.0".

What do you think? It could be done easily with two patches - one to
nailgun and one to the library. And we have to merge them at once, in
order not to break BVT. We'll also need to update the Fuel ISO on Fuel CI,
because otherwise deployment will fail (nailgun uses the version as a
building block for the path to the puppet manifests).
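As a hypothetical illustration of that coupling (the path layout and helper below are made up for this sketch, not actual nailgun code): if the manifest path is derived from the version string, renaming the version in nailgun without the matching library change points deployment at manifests that do not exist.

```python
# Hypothetical sketch: nailgun builds the puppet manifest path from the
# release version string, so both patches must merge at once.
def manifest_path(version):
    # illustrative layout only -- the real path scheme may differ
    return "/etc/puppet/{0}/manifests/site.pp".format(version)

# What the library ships before its own patch merges:
installed = {manifest_path("2015.2.0-8.0")}

# nailgun patched alone would look for a path the library never created:
assert manifest_path("liberty-8.0") not in installed
```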

Regards,
Igor

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077135.html

On Fri, Dec 25, 2015 at 12:59 AM, Sergii Golovatiuk
 wrote:
> Hi crew,
>
> Looking at our repositories I have found a lot of '2015.1.0' references.
> According to [1] Liberty has different versioning scheme. Should we change
> them to '2015.2.0' to meet [2]?
>
> [1] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
> [2] https://wiki.openstack.org/wiki/Release_Naming
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



[openstack-dev] [heat][telemetry] gate-ceilometer-dsvm-integration broken

2015-12-28 Thread Julien Danjou
Hi there,

The gate for telemetry projects is broken:

  https://bugs.launchpad.net/heat/+bug/1529583

The failure appears in Heat from what I understand:

 BadRequest: Expecting to find domain in project - the server could not
 comply with the request since it is either malformed or otherwise
 incorrect. The client is assumed to be in error. (HTTP 400)
 (Request-ID: req-3f39cc92-c356-4b92-9ab8-401738c8d31d

I've dug a bit, and I *think* that the problem lies in this recent
devstack patch:

  https://review.openstack.org/#/c/254755/

Could someone from Heat tell me if I'm a good Sherlock or if I am
completely out? :)

Cheers,
-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info




Re: [openstack-dev] [fuel] RabbitMQ in dedicated network

2015-12-28 Thread Bogdan Dobrelya
On 23.12.2015 18:50, Matthew Mosesohn wrote:
> I agree. As far as I remember, rabbit needs fqdns to work and map
> correctly. I think it means we should disable the ability to move the
> internal messaging network role in order to fix this bug until we can
> add extra dns entries per network role (or at least addr)

For DNS resolution, we could perhaps use SRV records [0].
Although nodes rely on /etc/hosts instead, AFAIK.

So we could as well do net-template-based FQDNs instead, like
messaging-node*-domain.local 1.2.3.4
corosync-node*-domain.local 5.6.7.8
database-node*-domain.local 9.10.11.12

and rely on *these* FQDNs instead.

[0] https://en.wikipedia.org/wiki/SRV_record
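For illustration, such SRV records might look like this hypothetical zone fragment (service name, TTLs, priorities/weights, and the AMQP port 5672 are assumptions for the sketch, not anything Fuel generates today):

```
_amqp._tcp.domain.local.        300 IN SRV 10 60 5672 messaging-node-1.domain.local.
_amqp._tcp.domain.local.        300 IN SRV 10 40 5672 messaging-node-2.domain.local.
messaging-node-1.domain.local.  300 IN A   1.2.3.4
messaging-node-2.domain.local.  300 IN A   1.2.3.5
```

Clients would then look up the service name rather than a per-node FQDN, which sidesteps the "FQDN resolves to the admin network" problem.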

> 
> On Dec 23, 2015 8:42 PM, "Andrew Maksimov"  > wrote:
> 
> Hi Kirill,
> 
> I don't think we can give up on using fqdn node names for RabbitMQ
> because we need to support TLS in the future. 
> 
> Thanks,
> Andrey Maximov
> Fuel Project Manager
> 
> On Wed, Dec 23, 2015 at 8:24 PM, Kyrylo Galanov
> > wrote:
> 
> Hello,
> 
> I would like to start discussion regarding the issue we have
> discovered recently [1].
> 
> In a nutshell, if RabbitMQ is configured to run in a separate
> mgmt/messaging network, it fails to build the cluster.
> While RabbitMQ is managed by Pacemaker and OCF script, the
> cluster is built using FQDN. Apparently, FQDN resolves to admin
> network which is different in this particular case.
> As a result, RabbitMQ on secondary controller node fails to join
> to primary controller node.
> 
> I can suggest two ways to tackle the issue: one is pretty
> simple, while the other is not.
> 
> The first way is to accept by design using admin network for
> RabbitMQ internal communication between controller nodes.
> 
> The second way is to dig into pacemaker
> and RabbitMQ reconfiguration. Since it requires abandoning the
> use of common FQDN/node names, this approach is debatable.
> 
> 
> --
> [1] https://bugs.launchpad.net/fuel/+bug/1528707
> 
> Best regards,
> Kyrylo
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [heat][telemetry] gate-ceilometer-dsvm-integration broken

2015-12-28 Thread Sergey Kraynev
Hi, Julien.

I suppose that your guess is right.
The mentioned patch was merged recently and it broke our Ceilometer-related
functional test.
There is a patch that skips it [1] and a related bug [2].

We already have a revert for this stuff [3] and a check patch based on
this revert [4].

[1] https://review.openstack.org/#/c/261272/
[2] https://bugs.launchpad.net/heat/+bug/1529058
[3] https://review.openstack.org/#/c/261308/
[4] https://review.openstack.org/#/c/261272/

On 28 December 2015 at 15:06, Julien Danjou  wrote:

> Hi there,
>
> The gate for telemetry projects is broken:
>
>   https://bugs.launchpad.net/heat/+bug/1529583
>
> The failure appears in Heat from what I understand:
>
>  BadRequest: Expecting to find domain in project - the server could not
>  comply with the request since it is either malformed or otherwise
>  incorrect. The client is assumed to be in error. (HTTP 400)
>  (Request-ID: req-3f39cc92-c356-4b92-9ab8-401738c8d31d
>
> I've dug a bit, and I *think* that the problem lies in this recent
> devstack patch:
>
>   https://review.openstack.org/#/c/254755/
>
> Could someone from Heat tell me if I'm a good Sherlock or if I am
> completely out? :)
>
> Cheers,
> --
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Sergey.


Re: [openstack-dev] [heat][telemetry] gate-ceilometer-dsvm-integration broken

2015-12-28 Thread Rabi Mishra
> Hi there,
> 
> The gate for telemetry projects is broken:
> 
>   https://bugs.launchpad.net/heat/+bug/1529583
> 
> The failure appears in Heat from what I understand:
> 
>  BadRequest: Expecting to find domain in project - the server could not
>  comply with the request since it is either malformed or otherwise
>  incorrect. The client is assumed to be in error. (HTTP 400)
>  (Request-ID: req-3f39cc92-c356-4b92-9ab8-401738c8d31d

Hi Julien,

We're already tracking this with bug [1] for heat. As a temporary fix we've
disabled the ceilometer tests in the heat dsvm gate jobs [2].

Yes, this started happening after the keystone/trusts config changes in the
devstack patch you mentioned. I've no idea how this can be fixed. As Steve
Hardy is away, either someone with keystone knowledge should fix this or we
merge the devstack patch revert [3] that I tested a few days ago.


[1] https://bugs.launchpad.net/heat/+bug/1529058
[2] https://review.openstack.org/#/c/261272/
[3] https://review.openstack.org/#/c/261308/

Regards,
Rabi
> 
> I've dug a bit, and I *think* that the problem lies in this recent
> devstack patch:
> 
>   https://review.openstack.org/#/c/254755/
> 
> Could someone from Heat tell me if I'm a good Sherlock or if I am
> completely out? :)
> 
> Cheers,
> --
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



Re: [openstack-dev] [heat][telemetry] gate-ceilometer-dsvm-integration broken

2015-12-28 Thread Julien Danjou
On Mon, Dec 28 2015, Rabi Mishra wrote:

> Yes, this has started happening after keystone/trusts config changes by the
> devstack patch you mentioned. I've no idea how this can be fixed. As Steve
> Hardy is away, either someone with keystone knowledge should fix this or we
> merge the devstack patch revert[3] that I tested few days ago.

Why don't you just revert the devstack change?

This is way saner than disabling the test! Steve will be able to rework
his initial change when he comes back.

I've commented on the patches with that.

Thanks guys!

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [glance][artifacts][app-catalog] Proposal to move artifacts meeting time

2015-12-28 Thread Alexander Tivelkov
Hi!

This has been implemented: the Artifacts subteam meeting is moved to 17:00
UTC Mondays by patch [1].

However, since we are still deep in the holiday season (and a significant
part of the team will be on PTO during the whole next week), I propose to
cancel both today's and next week's meetings and have the next Glare
IRC sync-up on January 11th, 17:00 UTC.

Have a happy new year!

[1] https://review.openstack.org/#/c/260998

On Wed, Dec 23, 2015 at 5:27 PM Alexander Tivelkov 
wrote:

> Thanks for voting.
>
> The most popular option is 17:00 UTC Mondays. Unfortunately the
> #openstack-meeting-4 channel turned out to be occupied at this timeslot, so
> I propose to change the channel to #openstack-meeting-alt
>
> I've submitted a patch to irc-meeting infra repo:
> https://review.openstack.org/#/c/260998
> Please vote for that patch if the channel change is ok to you.
>
> Thanks!
>
> On Mon, Dec 21, 2015 at 6:34 AM Nikhil Komawar 
> wrote:
>
>> Thanks Alex. This is a good idea. Please propose a review for the change
>> of schedule so that we can be assured the tests pass and decision would be
>> accepted.
>>
>>
>> On 12/18/15 9:20 AM, Alexander Tivelkov wrote:
>>
>> Hi folks,
>>
>> The current timeslot of our weekly IRC meeting for artifact subteam
>> (14:00 UTC Mondays) seems a bit inconvenient: it's a bit early for people
>> in the Pacific timezone. Since we want to maximise the presence of all the
>> interested parties at our sync-ups, I'd propose to move our meeting to some
>> later timeslot. I'd prefer it to remain in #openstack-meeting-4 (since all
>> the rest Glancy meetings are there) and be several days ahead of the main
>> Glance meeting (which is on Thursdays).
>>
>> I've checked the current openstack meetings schedule and found some slots
>> which may be more convenient than the current one. I've put them in doodle
>> at http://doodle.com/poll/7krdfp96kttnvmg7 - please vote there for the
>> slots which are ok for you. Then I'll make a patch to irc-meetings infra
>> repo.
>>
>> Thanks!
>> --
>> Regards,
>> Alexander Tivelkov
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> --
>>
>> Thanks,
>> Nikhil
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Regards,
> Alexander Tivelkov
>
-- 
Regards,
Alexander Tivelkov


[openstack-dev] [Solum] No meeting on December 29, 2015

2015-12-28 Thread Devdatta Kulkarni
Hi team,

We will not be holding our weekly IRC meeting on December 29 due to holidays.
We will convene again on January 5.

Regards,
Devdatta


[openstack-dev] [mistral] Cancelling team meeting today - 12/28/2015

2015-12-28 Thread Renat Akhmerov
Team,

We’re cancelling the team meeting today since a number of key team members 
won’t be able to attend.

Renat Akhmerov
@ Mirantis Inc.






Re: [openstack-dev] Configuring ISC dhclient on guest to acquire ipv6 default gateway

2015-12-28 Thread Andrei Radulescu-Banu
Thanks for your help, Vladimir. You are right, now that I look more closely, in 
dhcpv6-stateful mode the default gateway is acquired through router 
advertisement (instead of dhcp6 options). And the default gateway I get is the 
link-local address instead of the configured gateway_ip of 1:2:3:4::1.

This was confusing, to say the least - and not well explained anywhere - but it 
is working. I've tested all three modes of configuration (ipv6_address_mode and 
ipv6_ra_mode both simultaneously set to dhcpv6-stateful, dhcpv6-stateless or 
slaac). In all cases I am acquiring a default gateway - and it is the link 
local address of the default gateway. I can tell it works because the default 
gateway is pingable through the interface, and the low 3 bytes of the default 
gateway's IPv6 address match the low 3 bytes of the gateway MAC address. (The 
MAC address is viewable when reviewing the router interfaces through the 
Horizon UI, for example - the actual link local ip6 addresses of the default 
gateway are not displayed).
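The correspondence described above is the modified EUI-64 derivation of an IPv6 link-local address: flip the universal/local bit of the MAC's first octet and insert ff:fe in the middle. A small standalone sketch (not OpenStack code) of how the gateway's fe80:: address follows from its MAC:

```python
import ipaddress

def mac_to_link_local(mac):
    """Derive the modified-EUI-64 IPv6 link-local address from a MAC."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                                # flip universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    groups = ["%x" % ((eui64[i] << 8) | eui64[i + 1]) for i in range(0, 8, 2)]
    # Normalize through the ipaddress module (compresses zero groups, etc.)
    return str(ipaddress.IPv6Address("fe80::" + ":".join(groups)))

# The low 3 bytes of the result match the low 3 bytes of the MAC:
print(mac_to_link_local("fa:16:3e:12:34:56"))  # fe80::f816:3eff:fe12:3456
```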

Best,
Andrei


--

Message: 2
Date: Fri, 25 Dec 2015 19:07:31 +0300
From: Vladimir Eremin 
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] Configuring ISC dhclient on guest to
acquire ipv6 default gateway
Message-ID: <5864af60-644c-438e-9d82-80c4fced1...@mirantis.com>
Content-Type: text/plain; charset=utf-8

Hi Andrei,

Default gateways for IPv6 are always configured via Router Advertisements (RA), 
no matter what addressing mode is used. Please ensure that:

- you have a virtual router connected to your IPv6 subnet, this would provide 
RA (and actual router) in your network
- accept_ra is enabled in your guest OS: sysctl -a | grep accept_ra

-- 
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.




Re: [openstack-dev] Nova scheduler startup when database is not available

2015-12-28 Thread Jay Pipes

On 12/24/2015 02:30 PM, Clint Byrum wrote:

This is entirely philosophical, but we should think about when it is
appropriate to adopt which mode of operation.

There are basically two ways being discussed:

1) Fail fast.
2) Retry forever.

Fail fast pros- Immediate feedback for problems, no zombies to worry
about staying dormant and resurrecting because their configs accidentally
become right again. Much more determinism. Debugging is much simpler. To
summarize, it's up and working, or down and not.

Fail fast cons- Ripple effects. If you have a database or network blip
while services are starting, you must be aware of all of the downstream
dependencies and trigger them to start again, or have automation which
retries forever, giving up some of the benefits of fail-fast. Circular
dependencies require special workflow to unroll (Service1 aspect A relies
on aspect X of service2, service2 aspect X relies on aspect B of service1
which would start fine without service2).  To summarize: this moves the
retry-forever problem to orchestration, and complicates some corner cases.

Retry forever pros- Circular dependencies are cake. Blips auto-recover.
Bring-up orchestration is simpler (start everything, wait..). To
summarize: this makes orchestration simpler.

Retry forever cons- Non-determinism. It's impossible to just look at the
thing from outside and know if it is ready to do useful work. May
actually be hiding intermittent problems, requiring more logging and
indicators in general to allow analysis.

I honestly think any distributed system needs both.


So do I. I was proposing only that we deal with unrecoverable 
configuration errors on startup in a fail-fast way. I was not proposing 
that we remove the existing functionality that retries requests in the 
occasion where an already-up-and-running scheduler service experiences 
(typically transient) I/O disruptions to a dependent service like the DB 
or MQ.
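A minimal sketch of that split, assuming a generic service (illustrative only, not Nova's actual startup code): configuration errors propagate immediately and kill startup, while transient dependency errors are retried with exponential backoff.

```python
import time

class ConfigError(Exception):
    """Unrecoverable: a bad config should fail the service at startup."""

class TransientError(Exception):
    """Recoverable: a DB/MQ blip worth retrying."""

def start_service(load_config, connect, retries=5, base_delay=0.01):
    cfg = load_config()  # fail fast: ConfigError propagates, no retry
    for attempt in range(retries):
        try:
            return connect(cfg)  # retry only transient I/O failures
        except TransientError:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise RuntimeError("dependency still unavailable after %d tries" % retries)
```

The point of the shape is that the two failure modes never share a code path: nothing can accidentally retry a configuration error, and nothing fails fast on a blip.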




That said, the scheduler is, IMO, an _extremely_ complex piece of
OpenStack, with up and down stream dependencies on several levels (which
is why redesigning it gets debated so often on openstack-dev).


It's actually not all that complex. Or at least, it doesn't need to be :)

Best,
-jay

> Making

it fail fast would complicate the process of bringing and keeping an
OpenStack cloud up. There are probably some benefits I haven't thought
of, but the main benefit you stated would be that one would know when
their configuration tooling was wrong and giving their scheduler the
wrong database information, which is not, IMO, a hard problem (one can
read the config file after all). But I'm sure we could think of more if
we tried hard.

I hope I'm not too vague here.. I *want* fail-fast on everything.
However, I also don't think it can just be a blanket policy without
requiring everybody to deploy complex orchestration on top.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Fuel] Wipe of the nodes' disks

2015-12-28 Thread Andrew Woodward
In order to ensure that LVM can be configured as desired, it's necessary to
purge the disks and then reboot the node; otherwise the partitioning commands
will most likely fail on the next attempt, as the volumes will be initialized
before we can start partitioning the node. Hence, when a node is removed
from the environment, it is supposed to have this data destroyed. Since
it's a running system, the most effective way was to blast the first 1MB of
each partition (without many more reboots).
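A rough sketch of that "blast the first 1MB" step (illustrative Python against a throwaway image file; the real erase_node agent writes to the block devices themselves):

```python
def wipe_partition_start(path, nbytes=1 << 20):
    # Zero the first nbytes so stale LVM/RAID superblocks are not
    # detected the next time the node is provisioned.
    with open(path, "r+b") as dev:
        dev.seek(0)
        dev.write(b"\x00" * nbytes)

# Demo on an image file rather than a real /dev node:
with open("fake-part.img", "wb") as img:
    img.write(b"LABELONE" + b"\xaa" * (1 << 20))  # fake LVM label + payload
wipe_partition_start("fake-part.img")
with open("fake-part.img", "rb") as img:
    assert img.read(8) == b"\x00" * 8  # the label signature is gone
```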

As to the fallback to SSH, there are two times we use this process, with
the node reboot (after cobbler/IBP finishes), and with the wipe as we are
discussing here. These are for the odd occurrences of the nodes failing to
restart after the MCO command. I don't think anyone has had much success
trying to figure out why this occurs, but I've seen nodes get stuck in
provisioning and removal in multiple environments using 6.1 where they
managed to break the SSH fallback. It would occur on around 1 in 20 nodes,
seemingly at random. So with the SSH fallback I almost never see the failure
in node reboot.

On Thu, Dec 24, 2015 at 6:28 AM Alex Schultz  wrote:

> On Thu, Dec 24, 2015 at 1:29 AM, Artur Svechnikov
>  wrote:
> > Hi,
> > We have faced the issue that nodes' disks are wiped after stop
> deployment.
> > It occurs due to the logic of nodes removing (this is old logic and it's
> not
> > actual already as I understand). This logic contains step which calls
> > erase_node[0], also there is another method with wipe of disks [1].
> AFAIK it
> > was needed for smooth cobbler provision and ensure that nodes will not be
> > booted from disk when it shouldn't. Instead of cobbler we use IBP from
> > fuel-agent where current partition table is wiped before provision stage.
> > And using disk wiping as insurance that nodes will not be booted from disk
> > doesn't seem like a good solution. I want to propose not to wipe disks and
> simply
> > unset bootable flag from node disks.
> >
> > Please share your thoughts. Perhaps some other components use the fact
> that
> > disks are wiped after node removing or stop deployment. If it's so, then
> > please tell about it.
> >
> > [0]
> >
> https://github.com/openstack/fuel-astute/blob/master/lib/astute/nodes_remover.rb#L132-L137
> > [1]
> >
> https://github.com/openstack/fuel-astute/blob/master/lib/astute/ssh_actions/ssh_erase_nodes.rb
> >
>
> I thought the erase_node[0] mcollective action was the process that
> cleared a node's disks after their removal from an environment. When
> do we use the ssh_erase_nodes?  Is it a fall back mechanism if the
> mcollective fails?  My understanding on the history is based around
> needing to have the partitions and data wiped so that the LVM groups
> and other partition information does not interfere with the
> installation process the next time the node is provisioned.  That
> might have been a side effect of cobbler and we should test if it's
> still an issue for IBP.
>
>
> Thanks,
> -Alex
>
> [0]
> https://github.com/openstack/fuel-astute/blob/master/mcagents/erase_node.rb
>
> > Best regards,
> > Svechnikov Artur
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


Re: [openstack-dev] [kolla] Adding Ubuntu Liberty to Kolla-Mitaka

2015-12-28 Thread Michał Jastrzębski
Hey,

So one thing we need to consider here is that the 3 options we currently
support (binary+source CentOS and source Ubuntu) basically behave the
same way - they install current master (or last night's master, which is
close enough). This one will have fundamentally different behavior,
and we need to make that clear. This might be a documentation issue, but
I feel we need to make sure that it's there.

Cheers,
inc0

On 28 December 2015 at 10:16, Steven Dake (stdake)  wrote:
> Hey folks,
>
> I have received significant feedback that the lack of Ubuntu binary support
> is a problem for Kolla adoption.  Still, we had nobody to do the work, so we
> held off approving the blueprint.  There were other reasons such as:
>
> There is no delorean style repository for debian meaning we would always be
> installing Liberty with our Mitaka tree
> Given the first problem, a gate may not be feasible – a voting gate would
> never be feasible
>
>
> Still, I think on balance, the pain here is worth the gain.  We could just
> state we won't block any activity on the Debian binary release (this includes
> tagging, releasing, etc.) and mark it as "technical preview" until we sort
> out the Liberty->Mitaka port in the stable branch.  I'd like other core
> reviewers' thoughts.  I really really want this feature, even if it's tech
> preview and marked with all kinds of warnings.  Without it Kolla is
> incomplete.
>
> This review is terrible (it needs to be broken up into separate patches) but
> it's a huge step in the right direction IMO :)
>
> https://review.openstack.org/#/c/260069/2
>
> Comments and thoughts welcome.
>
> Regards
> -stevve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [neutron] Query about re-directing incoming traffic.

2015-12-28 Thread Vikram Choudhary
On Mon, Dec 28, 2015 at 10:20 PM, Jay Pipes  wrote:

> On 12/28/2015 11:13 AM, Vikram Choudhary wrote:
>
>> Hi All,
>>
>> We want to redirect all / some specific incoming traffic to a particular
>> neutron port, where a network function is deployed. [Network function
>> could be DPI, IDS, Firewall, Classifier, etc]. In this regard, we have
>> few queries:
>>
>> 1. How we can achieve this?
>>
>> 2. Do we have well-defined NBI's for such use-case?
>>
>
> What are NBIs?
>
Vikram: Does neutron already support any APIs for achieving this?


> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [kolla] Adding Ubuntu Liberty to Kolla-Mitaka

2015-12-28 Thread Steven Dake (stdake)
Hey folks,

I have received significant feedback that the lack of Ubuntu binary support is 
a problem for Kolla adoption.  Still, we had nobody to do the work, so we held 
off approving the blueprint.  There were other reasons such as:

  *   There is no delorean style repository for debian meaning we would always 
be installing Liberty with our Mitaka tree
  *   Given the first problem, a gate may not be feasible - a voting gate would 
never be feasible

Still, I think on balance, the pain here is worth the gain.  We could just 
state we won't block any activity on the Debian binary release (this includes 
tagging, releasing, etc.) and mark it as "technical preview" until we sort out 
the Liberty->Mitaka port in the stable branch.  I'd like other core reviewers' 
thoughts.  I really really want this feature, even if it's a tech preview and 
marked with all kinds of warnings.  Without it Kolla is incomplete.

This review is terrible (it needs to be broken up into separate patches) but 
it's a huge step in the right direction IMO :)

https://review.openstack.org/#/c/260069/2

Comments and thoughts welcome.

Regards
-stevve


Re: [openstack-dev] [Fuel] Wipe of the nodes' disks

2015-12-28 Thread Ryan Moe
>
>
> It's used in stop_deployment provision stage [0] and for control reboot
> [1].
>
> > Is it a fall back mechanism if the mcollective fails?
>
> Yes, it's a fallback mechanism of sorts, but it's always used [2].
>

As I remember it, the use of SSH for stopping provisioning was because of
our use of OS installers. While Anaconda was running, the only access to the
system was via SSH.


>
> > That might have been a side effect of cobbler and we should test if it's
> > still an issue for IBP.
>
> As I can see from the code, the partition table is always wiped before
> provisioning [3].
>

There were some intermittent issues with provisioning failures
(particularly with Ceph as I recall) when we only wiped the disks before
provisioning. These failures were caused by stale LVM and RAID metadata.
Doing it when the nodes were deleted and again before provisioning fixed
these problems. I'm not sure this was a side-effect of our old provisioning
method either. When we relied on the OS installers we generated kickstart
and preseed files that just used the standard LVM and mdadm utilities to
partition the system.

Thanks,
-Ryan


Re: [openstack-dev] [neutron] Query about re-directing incoming traffic.

2015-12-28 Thread Jay Pipes

On 12/28/2015 11:13 AM, Vikram Choudhary wrote:

Hi All,

We want to redirect all / some specific incoming traffic to a particular
neutron port, where a network function is deployed. [Network function
could be DPI, IDS, Firewall, Classifier, etc]. In this regard, we have
few queries:

1. How can we achieve this?

2. Do we have well-defined NBIs for such a use case?


What are NBIs?

-jay



Re: [openstack-dev] Nova scheduler startup when database is not available

2015-12-28 Thread Fox, Kevin M
Another data point: I've had to work around daemons failing fast, as discussed 
below, when working with docker-compose. It doesn't have nice dependency 
handling yet, and during the initial bootstrap of all the containers in a pod, 
some can fail because they don't stay up long enough for their dependencies to 
initialize. It's kind of painful. Fail fast has some nice features, but retry 
forever is often very useful in the field.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, December 28, 2015 9:45 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Nova scheduler startup when database is not 
available

On 12/24/2015 02:30 PM, Clint Byrum wrote:
> This is entirely philosophical, but we should think about when it is
> appropriate to adopt which mode of operation.
>
> There are basically two ways being discussed:
>
> 1) Fail fast.
> 2) Retry forever.
>
> Fail fast pros- Immediate feedback for problems, no zombies to worry
> about staying dormant and resurrecting because their configs accidentally
> become right again. Much more determinism. Debugging is much simpler. To
> summarize, it's up and working, or down and not.
>
> Fail fast cons- Ripple effects. If you have a database or network blip
> while services are starting, you must be aware of all of the downstream
> dependencies and trigger them to start again, or have automation which
> retries forever, giving up some of the benefits of fail-fast. Circular
> dependencies require special workflow to unroll (Service1 aspect A relies
> on aspect X of service2, service2 aspect X relies on aspect B of service1
> which would start fine without service2).  To summarize: this moves the
> retry-forever problem to orchestration, and complicates some corner cases.
>
> Retry forever pros- Circular dependencies are cake. Blips auto-recover.
> Bring-up orchestration is simpler (start everything, wait..). To
> summarize: this makes orchestration simpler.
>
> Retry forever cons- Non-determinism. It's impossible to just look at the
> thing from outside and know if it is ready to do useful work. May
> actually be hiding intermittent problems, requiring more logging and
> indicators in general to allow analysis.
>
> I honestly think any distributed system needs both.

So do I. I was proposing only that we deal with unrecoverable
configuration errors on startup in a fail-fast way. I was not proposing
that we remove the existing functionality that retries requests in the
occasion where an already-up-and-running scheduler service experiences
(typically transient) I/O disruptions to a dependent service like the DB
or MQ.
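The split Jay describes (fail fast on unrecoverable configuration errors at startup, retry transient I/O disruptions) can be sketched roughly as follows. This is a minimal illustration; the exception and function names are invented and are not Nova's actual code:

```python
import time


class ConfigError(Exception):
    """Unrecoverable: e.g. a malformed database connection URI."""


class TransientDBError(Exception):
    """Recoverable: e.g. the database is briefly unreachable."""


def start_service(connect, retries=3, delay=0.1):
    """Fail fast on config errors, retry transient ones with a delay."""
    attempts = 0
    while True:
        try:
            return connect()
        except ConfigError:
            # Bad configuration never heals itself: abort startup.
            raise
        except TransientDBError:
            attempts += 1
            if attempts > retries:
                raise  # give up after a bounded number of retries
            time.sleep(delay)
```

A "retry forever" policy is the same loop with no upper bound on `attempts`; the debate above is essentially about which exceptions belong in which branch.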


> That said, the scheduler is, IMO, an _extremely_ complex piece of
> OpenStack, with up and down stream dependencies on several levels (which
> is why redesigning it gets debated so often on openstack-dev).

It's actually not all that complex. Or at least, it doesn't need to be :)

Best,
-jay

 > Making
> it fail fast would complicate the process of bringing and keeping an
> OpenStack cloud up. There are probably some benefits I haven't thought
> of, but the main benefit you stated would be that one would know when
> their configuration tooling was wrong and giving their scheduler the
> wrong database information, which is not, IMO, a hard problem (one can
> read the config file after all). But I'm sure we could think of more if
> we tried hard.
>
> I hope I'm not too vague here.. I *want* fail-fast on everything.
> However, I also don't think it can just be a blanket policy without
> requiring everybody to deploy complex orchestration on top.
>




Re: [openstack-dev] [fuel] RabbitMQ in dedicated network

2015-12-28 Thread Andrew Woodward
On Mon, Dec 28, 2015 at 1:13 AM Bogdan Dobrelya 
wrote:

> On 23.12.2015 18:50, Matthew Mosesohn wrote:
> > I agree. As far as I remember, rabbit needs fqdns to work and map
> > correctly. I think it means we should disable the ability to move the
> > internal messaging network role in order to fix this bug until we can
> > add extra dns entries per network role (or at least addr)
>
> For DNS resolve, we could use SRV [0] records perhaps.
> Although, nodes rely on /etc/hosts instead, AFAIK.
>
> So we could as well do net-template-based FQDNs instead, like
> messaging-node*-domain.local 1.2.3.4
> corosync-node*-domain.local 5.6.7.8
> database-node*-domain.local 9.10.11.12
>
> and rely on *these* FQDNS instead.
>

This is probably going to be the best way to work out this issue since we
can move all of these services around as it is. I would attempt to remove
the node identifier if possible so the names aren't wrong if the service is
moved between nodes.


> [0] https://en.wikipedia.org/wiki/SRV_record
>
> >
> > On Dec 23, 2015 8:42 PM, "Andrew Maksimov"  > > wrote:
> >
> > Hi Kirill,
> >
> > I don't think we can give up on using fqdn node names for RabbitMQ
> > because we need to support TLS in the future.
> >
> > Thanks,
> > Andrey Maximov
> > Fuel Project Manager
> >
> > On Wed, Dec 23, 2015 at 8:24 PM, Kyrylo Galanov
> > > wrote:
> >
> > Hello,
> >
> > I would like to start discussion regarding the issue we have
> > discovered recently [1].
> >
> > In a nutshell, if RabbitMQ is configured to run in a separate
> > mgmt/messaging network, it fails to build the cluster.
> > While RabbitMQ is managed by Pacemaker and an OCF script, the
> > cluster is built using FQDNs. Apparently, the FQDN resolves to the
> > admin network, which is different in this particular case.
> > As a result, RabbitMQ on the secondary controller node fails to
> > join the primary controller node.
> >
> > I can suggest two ways to tackle the issue: one is pretty
> > simple, while the other is not.
> >
> > The first way is to accept, by design, using the admin network for
> > RabbitMQ internal communication between controller nodes.
> >
> > The second way is to dig into Pacemaker
> > and RabbitMQ reconfiguration. Since it requires abandoning
> > common FQDN/node names, this approach can be argued against.
> >
> >
> > --
> > [1] https://bugs.launchpad.net/fuel/+bug/1528707
> >
> > Best regards,
> > Kyrylo
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>
-- 
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community


[openstack-dev] [neutron] Query about re-directing incoming traffic.

2015-12-28 Thread Vikram Choudhary
Hi All,

We want to redirect all / some specific incoming traffic to a particular
neutron port, where a network function is deployed. [Network function could
be DPI, IDS, Firewall, Classifier, etc]. In this regard, we have few
queries:

1. How we can achieve this?

2. Do we have well-defined NBI's for such use-case?

Any thought / suggestion will be appreciated.

Thanks
Vikram


Re: [openstack-dev] [kolla] Adding Ubuntu Liberty to Kolla-Mitaka

2015-12-28 Thread Sam Yaple
>We could just state we wont block any activity on the Debian binary
release (this includes tagging, releasing, etc)

I think that's key. If cloud-archive is available before we tag, then
fantastic, but I don't want to hold it up because of this. Additionally, we
could do experimental gates for this and potentially a non-voting binary
build gate. I doubt we could do a deploy gate because it will always fail.
You can't run Liberty with our Mitaka code and because of that the gate
would be broken almost immediately after a release.

Artur's patch is being broken up into several patches right now. With the
pace he is going the code should be mergable before the end of the week.

Sam Yaple

On Mon, Dec 28, 2015 at 4:28 PM, Michał Jastrzębski 
wrote:

> Hey,
>
> So one thing we need to consider there is that currently the 3 options we
> support (binary+source CentOS and source Ubuntu) basically behave the
> same way - they install current master (or last night's master, which is
> close enough). This one will have fundamentally different behavior,
> and we need to make that clear. This might be a documentation issue, but
> I feel we need to make sure that it's there.
>
> Cheers,
> inc0
>
> On 28 December 2015 at 10:16, Steven Dake (stdake) 
> wrote:
> > Hey folks,
> >
> > I have received significant feedback that the lack of Ubuntu binary
> support
> > is a problem for Kolla adoption.  Still, we had nobody to do the work,
> so we
> > held off approving the blueprint.  There were other reasons such as:
> >
> > There is no delorean style repository for debian meaning we would always
> be
> > installing Liberty with our Mitaka tree
> > Given the first problem, a gate may not be feasible – a voting gate would
> > never be feasible
> >
> >
> > Still, I think on balance, the pain here is worth the gain.  We could
> just
> > state we won't block any activity on the Debian binary release (this
> includes
> > tagging, releasing, etc) and mark it as "technical preview" until we sort
> > out the Liberty->Mitaka port in the stable branch.  I'd like other core
> > reviewers' thoughts.  I really really want this feature, even if it's a tech
> > preview and marked with all kinds of warnings.  Without it Kolla is
> > incomplete.
> >
> > This review is terrible (it needs to be broken up into separate patches)
> but
> > its a huge step in the right direction IMO :)
> >
> > https://review.openstack.org/#/c/260069/2
> >
> > Comments and thoughts welcome.
> >
> > Regards
> > -stevve
> >
> >
> >
>
>


Re: [openstack-dev] Nova scheduler startup when database is not available

2015-12-28 Thread Jay Pipes

On 12/23/2015 08:35 PM, Morgan Fainberg wrote:

On Wed, Dec 23, 2015 at 10:32 AM, Jay Pipes > wrote:

On 12/23/2015 12:27 PM, Lars Kellogg-Stedman wrote:

I've been looking into the startup constraints involved when launching
Nova services with systemd using Type=notify (which causes systemd to
wait for an explicit notification from the service before considering
it to be "started").  Some services (e.g., nova-conductor) will happily
"start" even if the backing database is currently unavailable (and
will enter a retry loop waiting for the database).
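For context, the notification mentioned above is a datagram written to the Unix socket that systemd names in the NOTIFY_SOCKET environment variable. Here is a minimal sketch of the mechanism, simulating systemd's side with a local socket (real sd_notify implementations also handle abstract-namespace sockets whose address starts with "@"):

```python
import os
import socket
import tempfile


def sd_notify(state):
    """Send a systemd-style notification such as b"READY=1"."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return  # not started with Type=notify; nothing to do
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.connect(addr)
        s.send(state)


# Simulate systemd's side of the protocol for demonstration.
path = os.path.join(tempfile.mkdtemp(), "notify.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(path)
os.environ["NOTIFY_SOCKET"] = path

sd_notify(b"READY=1")     # the service signals readiness
received = server.recv(64)
print(received.decode())  # READY=1
```

The question in this thread is exactly *when* a service should make that `sd_notify(b"READY=1")` call: before or after its database check succeeds.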

Other services -- specifically, nova-scheduler -- will block waiting
for the database *before* providing systemd with the necessary
notification.

nova-scheduler blocks because it wants to initialize a list of
available aggregates (in
scheduler.host_manager.HostManager.__init__),
which it gets by calling objects.AggregateList.get_all.

Does it make sense to block service startup at this stage?  The
database disappearing during runtime isn't a hard error -- we will
retry and reconnect when it comes back -- so should the same
situation
at startup be a hard error?  As an operator, I am more interested in
"did my configuration files parse correctly?" at startup, and would
generally prefer the service to start (and permit any dependent
services to start) even when the database isn't up (because that's
probably a situation of which I am already aware).


If your configuration file parsed correctly but has the wrong
database connection URI, what good is the service in an active
state? It won't be able to do anything at all.

This is why I think it's better to have hard checks like for
connections on startup and not have services active if they won't be
able to do anything useful.


Are you advocating that the scheduler bail out and cease to run, or that it
not mark itself as active? I am in favour of the second scenario but
not the first. There are cases where it would be nice to start the
scheduler and have it at least report "hey, I can't contact the DB" but
not mark itself active, and continue to run and report/try
to reconnect.


I am in favor of the service not starting at all if the database cannot 
be connected to in a "test connection" scenario.



It isn't clear which level of "hard check" you're advocating in your
response and I want to clarify for the sake of conversation.


If the scheduler cannot contact the database, it cannot do anything 
useful at all. I don't see the point of having the service daemon "up" 
if it cannot do anything useful.


Most monitoring tooling (Nagios or nginx for simple load balancing) and 
distributed service management (Zookeeper) look at whether a service is 
responding on some port to determine if the service is up. If the 
service responds on said port, but cannot do anything useful, the 
information is less than useful...it's harmful, IMHO.


For errors that are recoverable, sure keep the service up and running 
and retry the condition that is recoverable. But in the case of bad 
configuration, it's not a recoverable error, and I don't think the 
service should be started at all.


Hope that clears things up.

Best,
-jay


It would be relatively easy to have the scheduler lazy-load the list
of aggregates on first references, rather than at __init__.


Sure, but if the root cause of the issue is a problem due to
misconfigured connection string, then that lazy-load will just bomb
out and the scheduler will be useless anyway. I'd rather have a
fail-early/fast occur here than a fail-late.
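The lazy-load idea quoted above could look roughly like this (a hypothetical sketch; Nova's real HostManager differs):

```python
class HostManager:
    def __init__(self, fetch_aggregates):
        # No DB access at construction time; just remember how to fetch.
        self._fetch = fetch_aggregates
        self._aggregates = None

    @property
    def aggregates(self):
        # First access performs the (possibly failing) DB read;
        # subsequent accesses reuse the cached result.
        if self._aggregates is None:
            self._aggregates = self._fetch()
        return self._aggregates
```

As the reply notes, this only defers the failure: with a bad connection string, the first access to `aggregates` still fails, just later than at `__init__`.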

Best,
-jay

 > I'm not

familiar enough with the nova code to know if there would be any
undesirable implications of this behavior.  We're already punting
initializing the list of instances to an asynchronous task in
order to
avoid blocking service startup.

Does it make sense to permit nova-scheduler to complete service
startup in the absence of the database (and then retry the
connection
in the background)?







[openstack-dev] [Infra] Meeting Tuesday December 29th at 19:00 UTC

2015-12-28 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is not skipping meetings
through the end-of-month holidays, so we are having our next weekly
meeting as scheduled on Tuesday December 29th, at 19:00 UTC in
#openstack-meeting.

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items, and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-22-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-22-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-22-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



[openstack-dev] [Horizon] Mid-cycle Sprint

2015-12-28 Thread David Lyle
The Horizon mid-cycle sprint is in Hillsboro, Oregon Feb 23-25 and
hosted at the Intel site in Hillsboro just west of Portland.

The wiki for the mid-cycle sprint is
https://wiki.openstack.org/wiki/Sprints/HorizonMitakaSprint

Please note your intention to attend on the wiki page.

Thanks,
David



Re: [openstack-dev] [Neutron] [Dragonflow] Atomic update doesn't work with etcd-driver

2015-12-28 Thread Gal Sagie
Hi Li Ma,

I haven't investigated the root problem yet, as I only saw it at the end of
the day yesterday.
However, if you look at this test: https://review.openstack.org/#/c/261997/
it verifies that the correct number of flows
are installed after a clean devstack process.

What I noticed with this patch is that the devstack stack process finishes
successfully, but the controller
has a repeating exception and not all the flows are configured.
I verified it happened a few times to make sure.

Hopefully when the patch merges we will have a gate test that checks this
kind of scenario.
I am not sure what the root cause is, but we will need to
investigate it, as
it may be hiding another bug that isn't directly related to that patch.
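For background, the "atomic update" named in the subject is usually a read-modify-write retried with compare-and-swap on the record's version. A generic sketch with an in-memory stand-in for an etcd-style store (this is not the actual Dragonflow driver code):

```python
class KVStore:
    """In-memory stand-in for a versioned KV store like etcd."""

    def __init__(self):
        self._data = {}  # key -> (value, version)

    def get(self, key):
        return self._data.get(key, (None, 0))

    def compare_and_swap(self, key, value, expected_version):
        _, current = self.get(key)
        if current != expected_version:
            return False  # a concurrent writer got there first
        self._data[key] = (value, current + 1)
        return True


def atomic_update(store, key, fn, max_retries=10):
    """Read-modify-write that retries when a concurrent write wins."""
    for _ in range(max_retries):
        value, version = store.get(key)
        if store.compare_and_swap(key, fn(value), version):
            return True
    return False
```

A driver bug anywhere in this read/compare/retry cycle can silently drop updates, which would match the symptom of missing flows described above.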

Gal.


On Tue, Dec 29, 2015 at 4:06 AM, Li Ma  wrote:

> Hi Gal, you reverted this patch [1] due to the broken pipeline. Could
> you provide some debug information or detailed description? When I run
> my devstack, I cannot reproduce the sync problem.
>
> [1]
> https://github.com/openstack/dragonflow/commit/f83dd5795d54e1a70b8bdec1e6dd9f7815eb6546
>
> --
>
> Li Ma (Nick)
> Email: skywalker.n...@gmail.com
>
>



-- 
Best Regards ,

The G.


Re: [openstack-dev] Nova scheduler startup when database is not available

2015-12-28 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2015-12-28 09:45:39 -0800:
> On 12/24/2015 02:30 PM, Clint Byrum wrote:
> > This is entirely philosophical, but we should think about when it is
> > appropriate to adopt which mode of operation.
> >
> > There are basically two ways being discussed:
> >
> > 1) Fail fast.
> > 2) Retry forever.
> >
> > Fail fast pros- Immediate feedback for problems, no zombies to worry
> > about staying dormant and resurrecting because their configs accidentally
> > become right again. Much more determinism. Debugging is much simpler. To
> > summarize, it's up and working, or down and not.
> >
> > Fail fast cons- Ripple effects. If you have a database or network blip
> > while services are starting, you must be aware of all of the downstream
> > dependencies and trigger them to start again, or have automation which
> > retries forever, giving up some of the benefits of fail-fast. Circular
> > dependencies require special workflow to unroll (Service1 aspect A relies
> > on aspect X of service2, service2 aspect X relies on aspect B of service1
> > which would start fine without service2).  To summarize: this moves the
> > retry-forever problem to orchestration, and complicates some corner cases.
> >
> > Retry forever pros- Circular dependencies are cake. Blips auto-recover.
> > Bring-up orchestration is simpler (start everything, wait..). To
> > summarize: this makes orchestration simpler.
> >
> > Retry forever cons- Non-determinism. It's impossible to just look at the
> > thing from outside and know if it is ready to do useful work. May
> > actually be hiding intermittent problems, requiring more logging and
> > indicators in general to allow analysis.
> >
> > I honestly think any distributed system needs both.
> 
> So do I. I was proposing only that we deal with unrecoverable 
> configuration errors on startup in a fail-fast way. I was not proposing 
> that we remove the existing functionality that retries requests in the 
> occasion where an already-up-and-running scheduler service experiences 
> (typically transient) I/O disruptions to a dependent service like the DB 
> or MQ.
> 

Even during startup, failing fast on remote dependencies complicates
things. There's no dependency resolver for the entire cloud, as Kevin
Fox suggested.

> 
> > That said, the scheduler is, IMO, an _extremely_ complex piece of
> > OpenStack, with up and down stream dependencies on several levels (which
> > is why redesigning it gets debated so often on openstack-dev).
> 
> It's actually not all that complex. Or at least, it doesn't need to be :)
> 

On this we definitely agree.



Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-12-28 Thread Dmitry Borodaenko
+1 for "fuel: recheck". A nice-to-have addition would be:
"fuel: recheck verify-fuel-library-tasks"

to retrigger just one failed job.

-- 
Dmitry Borodaenko
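As an illustration of the namespaced trigger suggested above, matching could look like this (a hypothetical pattern, not the actual Zuul or Fuel CI regex):

```python
import re

# Hypothetical pattern: "fuel: recheck", optionally followed by
# a single job name to retrigger.
TRIGGER = re.compile(r'^fuel:\s*recheck(?:\s+(?P<job>[\w-]+))?\s*$')


def parse(comment):
    match = TRIGGER.match(comment.strip())
    if match is None:
        return None                     # not a Fuel CI trigger
    return match.group("job") or "all"  # no job given: rerun everything
```

The `fuel:` prefix keeps the trigger out of the empty namespace, so a plain "recheck" still goes only to the upstream CI.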


On Mon, Nov 23, 2015 at 01:32:35PM +, Bob Ball wrote:
> There was a conversation a while ago around explicitly avoiding the empty 
> namespace - see 
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/041238.html
> 
> The approach I have used since is "xenserver: recheck" and "xen: recheck".
> 
> I think the appropriate command should be "fuel: recheck".
> 
> Bob
> 
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: 20 November 2015 21:36
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI
> > jobs
> > 
> > Why not "recheck fuel" to align with how other OpenStack 3rd party CI
> > hooks work? See: recheck xen-server or recheck hyper-v
> > 
> > Best,
> > -jay
> > 
> > On 11/20/2015 05:24 AM, Igor Belikov wrote:
> > > Alexey,
> > >
> > > First of all, "refuel" sounds very cool.
> > > Thanks for raising this topic, I would like to hear more opinions here.
> > > On one hand, different keyword would help to prevent unnecessary
> > > infrastructure load, I agree with you on that. And on another hand,
> > > using existing keywords helps to avoid confusion and provides expected
> > > behaviour for our CI jobs. Far too many times I've heard questions like
> > > "Why 'recheck' doesn't retrigger Fuel CI jobs?".
> > >
> > > So I would like to hear more thoughts here from our developers. And I
> > > will investigate how another third party CI systems handle this questions.
> > > --
> > > Igor Belikov
> > > Fuel CI Engineer
> > > ibeli...@mirantis.com 
> > >
> > >
> > >
> > >
> > >
> > >
> > >> On 20 Nov 2015, at 16:00, Alexey Shtokolov  > >> > wrote:
> > >>
> > >> Igor,
> > >>
> > >> Thank you for this feature.
> > >> AFAIU, recheck/reverify is mostly useful for internal CI-related failures.
> > >> And Fuel CI and Openstack CI are two different infrastructures.
> > >> So if smth is broken on Fuel CI, "recheck" will restart all jobs on
> > >> Openstack CI too. And opposite case works the same way.
> > >>
> > >> Probably we should use another keyword for Fuel CI to prevent an extra
> > >> load on the infrastructure? For example "refuel" or smth like this?
> > >>
> > >> Best regards,
> > >> Alexey Shtokolov
> > >>
> > >> 2015-11-20 14:24 GMT+03:00 Stanislaw Bogatkin
> >  > >> >:
> > >>
> > >> Igor,
> > >>
> > >> it is much more clear for me now. Thank you :)
> > >>
> > >> On Fri, Nov 20, 2015 at 2:09 PM, Igor Belikov
> > >> > wrote:
> > >>
> > >> Hi Stanislaw,
> > >>
> > >> The reason behind this is simple - deployment tests are heavy.
> > >> Each deployment test occupies whole server for ~2 hours, for
> > >> each commit we have 2 deployment tests (for current
> > >> fuel-library master) and that's just because we don't test
> > >> CentOS deployment for now.
> > >> If we assume that developers will retrigger deployment tests
> > >> only when retrigger would actually solve the failure - it's
> > >> still not smart in terms of HW usage to retrigger both tests
> > >> when only one has failed, for example.
> > >> And there are cases when retrigger just won't do it and CI
> > >> Engineer must manually erase the existing environment on slave
> > >> or fix it by other means, so it's better when CI Engineer
> > >> looks through logs before each retrigger of deployment test.
> > >>
> > >> Hope this answers your question.
> > >>
> > >> --
> > >> Igor Belikov
> > >> Fuel CI Engineer
> > >> ibeli...@mirantis.com 
> > >>
> > >>> On 20 Nov 2015, at 13:57, Stanislaw Bogatkin
> > >>> >
> > wrote:
> > >>>
> > >>> Hi Igor,
> > >>>
> > >>> would you be so kind as to tell why fuel-library deployment
> > >>> tests don't support this? Maybe there is a link to previous
> > >>> talks about it?
> > >>>
> > >>> On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov
> > >>> > wrote:
> > >>>
> > >>> Hi,
> > >>>
> > >>> I'd like to inform you that all jobs running on Fuel CI
> > >>> (with the exception of fuel-library deployment tests) now
> > >>> support retriggering via "recheck" or "reverify" comments
> > >>> in Gerrit.
> > >>> Exact regex is the same one used in Openstack-Infra's
> > >>> zuul and can be found here
> > >>> 

[openstack-dev] [Kuryr] Progress Update and Kubernetes Integration

2015-12-28 Thread Gal Sagie
Hello everyone,

Just wanted to give you all a progress update on what we have been doing
in Kuryr.
We conducted an IRC meeting today; you can see the logs here [1]

1) We got Docker pluggable IPAM support in Kuryr thanks to Vikas Choudhary.
   There are still some small points to address, but most of the code is
   already merged.
   I plan to write a blog post describing the mechanism.

2) Mohammad Banikazemi verified that Kuryr works with Docker Swarm seamlessly
   and that we are compatible with the latest Docker libnetwork.

3) We have a fullstack job running in the gate. These tests run against a
working OpenStack environment with Kuryr deployed, and use the Neutron
client and the Docker Python client to simulate different scenarios and
test Kuryr functionality in addition to our unit tests.

4) We have a Rally job running. We plan to contribute a Docker plugin to
Rally and have some of the above tests run as benchmarking operations. I
think this is a very important goal, as it will also help us benchmark
different Neutron backends and solutions for container networking and let
users/operators make better comparisons between their different options.

5) We are working on packaging for Kuryr (Thanks to Jaume Devesa for that)

6) We are investigating using Linux capabilities rather than rootwrap or
   running Kuryr as root (thanks to Antoni Segura Puimedon).
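
As a side note on what that investigation involves: a process's effective
capabilities are exposed as a hex bitmask in the CapEff line of
/proc/<pid>/status, so one can check whether, say, CAP_NET_ADMIN is held
without being full root. A minimal sketch (bit numbers are the standard
ones from linux/capability.h; the mask value is a made-up example):

```python
# Decode a CapEff bitmask (as found in /proc/<pid>/status) and check
# for specific capabilities. Bit numbers per linux/capability.h.
CAP_NET_ADMIN = 12
CAP_NET_RAW = 13
CAP_SYS_ADMIN = 21

def has_cap(cap_eff_hex, cap_bit):
    # CapEff is printed as hex, e.g. "0000000000003000".
    return bool(int(cap_eff_hex, 16) & (1 << cap_bit))

# Example mask granting only CAP_NET_ADMIN and CAP_NET_RAW, i.e. enough
# for network plumbing without full root privileges.
mask = hex((1 << CAP_NET_ADMIN) | (1 << CAP_NET_RAW))
```

On a deployed binary this approach would translate to granting the file
capability (e.g. `setcap cap_net_admin+ep` on the executable) instead of
running the whole service as root or through rootwrap.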

We also decided to give the Kubernetes integration higher priority and are
now brainstorming design options for integrating Kubernetes and Kuryr,
which means integrating Kubernetes with Neutron (the OpenStack networking
abstraction). If you have any ideas in this area, or have done something
similar (or are in the middle of doing so), please share them in this
Etherpad [2]. Our goal is to expose the different ways of doing this to the
user and find the best common method.

We are also starting to approach the Kuryr-Magnum integration and nested
containers, and we hope to make some progress on this by the end of the
release.

Thanks to everyone who contributes and helps; it is greatly appreciated by
all of us! If you would like to join, feel free to step into our IRC
channel #openstack-kuryr on Freenode, join the IRC meeting [3], or just
email me with any question or idea.

Thanks
Gal.

[1]
http://eavesdrop.openstack.org/meetings/kuryr/2015/kuryr.2015-12-29-03.01.html
[2] https://etherpad.openstack.org/p/kuryr_k8s
[3] https://wiki.openstack.org/wiki/Meetings/Kuryr
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] why not abandon launchpad milestones and/or blueprints?

2015-12-28 Thread Kirill Zaitsev
One more argument in favour of abandoning milestones is that they do not
work well with the new stable-branch release structure and with multiple
repositories relying on a single Launchpad project. We might have murano
1.0.5 and murano-dashboard 1.0.3, which would be totally fine under the
current stable-branch release scheme but really weird in Launchpad: we
would either have several active milestones or keep only the latest
milestone active. Alternatively, we would have to make unnecessary releases
to keep the tags in the repositories synced (which I also do not think is a
nice thing to do).

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 22 December 2015 at 17:18:25, Kirill Zaitsev (kzait...@mirantis.com) wrote:

Hi all. A couple of meetings ago I brought this up and promised to start a
discussion on the ML.
There are two ideas behind this letter:

1st) Since we (and OpenStack at large) started using reno for release
notes, Launchpad milestones became redundant as a tool for tracking what
has been done during development of a certain version of an app.
We might still use milestones to plan what we intend to do during a certain
period of development, but in my opinion that never really worked, since
dozens of open/in-progress bugs get transferred to the next milestone at
release time.
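
As background on what reno replaces milestones with: each change carries
its own release note, a small YAML file under releasenotes/notes/ in the
repository, typically created with `reno new <slug>`. The section names
below are reno's standard ones; the filename and note text are invented
for illustration:

```yaml
# releasenotes/notes/example-note-abc123.yaml (illustrative content)
features:
  - Added support for feature X.
fixes:
  - Fixed a bug in component Y.
```

Because these files live in the branch, `reno report` can reconstruct what
went into a given version directly from git, with no milestone bookkeeping.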

I’d like to discuss the idea of stopping the use of milestones on Launchpad
and just targeting bugs/bps to series.

+1 from me on the idea as I don’t see milestones being useful anymore

2nd) We currently have three ways to track something we’d like to
implement: a wishlist bug, a blueprint, or a spec. A spec always requires a
blueprint, but a blueprint doesn’t always require a spec.

The idea is to minimise the number of tracking tools we use and to stop
using blueprints altogether. For small features this would mean filing a
wishlist-level bug; for large features we should file a spec anyway, and
probably a specially tagged bug.

Pros: simpler, more streamlined release/bug/feature management, and one
place to search for all functionality.
Cons: we would have to write Closes-Bug for features, which is somewhat
misleading, and we wouldn’t be able to track dependencies between bugs the
way we now do for bps.

I don’t have a strong opinion on this one, so I would love to hear some
opinions.

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc